Search (218 results, page 1 of 11)

  • language_ss:"e"
  • theme_ss:"Internet"
  1. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 5860) [ClassicSimilarity], result of:
          0.09482904 = score(doc=5860,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 5860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5860)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
              0.035440356 = score(doc=5860,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 5860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5860)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Richer graph models permit authors to 'program' the browsing behaviour they want WWW readers to see by turning the hypertext into a hyperprogram with specific semantics. Multiple browsing streams can be started under the author's control and then kept in step through the synchronization mechanisms provided by the graph model. Adds a Semantic Web Graph Layer (SWGL) which allows dynamic interpretation of link and node structures according to graph models. Details the SWGL and its architecture, some sample protocol implementations, and the latest extensions to MHTML.
    Date
    1. 8.1996 22:08:06
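The indented breakdowns under each hit are Lucene ClassicSimilarity (TF-IDF) explain trees. As a sanity check, the first hit's numbers can be reproduced in a short Python sketch; queryNorm and fieldNorm are copied from the tree above rather than derived, since both depend on the full query and the indexed document.

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(term frequency)
    return math.sqrt(freq)

query_norm = 0.037368443        # copied from the explain tree
field_norm = 0.0546875          # copied; encodes (quantized) document length

i = idf(390, 44218)                       # idf of "interpretation", ~5.7281795
query_weight = i * query_norm             # ~0.21405315
field_weight = tf(2.0) * i * field_norm   # ~0.4430163
interpretation_weight = query_weight * field_weight   # ~0.09482904

# add the second clause's weight (the "22" term) and apply coord(2/7)
score = (interpretation_weight + 0.017720178) * (2 / 7)   # ~0.032156922
```

The same arithmetic explains every other tree on this page; only the term frequency, fieldNorm, and the coord factor change between hits.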
  2. Access to electronic information, services and networks : an interpretation of the LIBRARY BILL OF RIGHTS (1995) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 4713) [ClassicSimilarity], result of:
          0.11495014 = score(doc=4713,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 4713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4713)
      0.14285715 = coord(1/7)
    
    Abstract
    At the 1996 Midwinter Meeting of the 57,000-member ALA in San Antonio, the ALA affirmed user rights in cyberspace and called on the US Congress to protect public access to information during the shift from print to electronic publishing. The latest ALA News over the net reported what Betty J. Turock, president of the ALA, said: 'Free access to information is essential to a democracy. Our concern as professional librarians is that new technology not become a barrier for members of the public.' The new 'Access to Electronic Information, Services and Networks: an interpretation of the Library Bill of Rights' was adopted by the ALA Council at the Midwinter Meeting, and will have profound implications and uses for many libraries and librarians in the months to come. Because of its significance and potential impact, the text of this document has been downloaded from the ALA's Web site at http://www.ala.org to facilitate its use by readers of this journal
  3. Hochheiser, H.; Shneiderman, B.: Using interactive visualizations of WWW log data to characterize access patterns and inform site design (2001) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5765) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5765,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5765, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5765)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. Although useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive visualizations can be used to provide users with greater abilities to interpret and explore Web log data. By combining two-dimensional displays of thousands of individual access requests, color and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional Web log analysis tools. We introduce a series of interactive visualizations that can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
  4. Hochheiser, H.; Shneiderman, B.: Understanding patterns of user visits to Web sites : Interactive Starfield visualizations of WWW log data (1999) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 6713) [ClassicSimilarity], result of:
          0.11495014 = score(doc=6713,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 6713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=6713)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. While useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive starfield visualizations can be used to provide users with greater abilities to interpret and explore web log data. By combining two-dimensional displays of thousands of individual access requests, color and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional web log analysis tools. We introduce a series of interactive starfield visualizations, which can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
  5. Thelwall, M.; Vann, K.; Fairclough, R.: Web issue analysis : an integrated water resource management case study (2006) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5906) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5906,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5906, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5906)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article Web issue analysis is introduced as a new technique to investigate an issue as reflected on the Web. The issue chosen, integrated water resource management (IWRM), is a United Nations-initiated paradigm for managing water resources in an international context, particularly in developing nations. As with many international governmental initiatives, there is a considerable body of online information about it: 41,381 hypertext markup language (HTML) pages and 28,735 PDF documents mentioning the issue were downloaded. A page uniform resource locator (URL) and link analysis revealed the international and sectoral spread of IWRM. A noun and noun phrase occurrence analysis was used to identify the issues most commonly discussed, revealing some unexpected topics such as private sector and economic growth. Although the complexity of the methods required to produce meaningful statistics from the data is disadvantageous to easy interpretation, it was still possible to produce data that could be subject to a reasonably intuitive interpretation. Hence Web issue analysis is claimed to be a useful new technique for information science.
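The noun and noun phrase occurrence analysis described above amounts to counting recurring word sequences across the downloaded pages. A minimal sketch, using plain word bigrams as a crude stand-in for real noun-phrase extraction (which would need part-of-speech tagging); the sample page texts are invented:

```python
import re
from collections import Counter

def phrase_frequencies(pages, n=2):
    # count lowercase word n-grams across a collection of page texts
    counts = Counter()
    for text in pages:
        words = re.findall(r"[a-z]+", text.lower())
        for k in range(len(words) - n + 1):
            counts[" ".join(words[k:k + n])] += 1
    return counts

# two invented snippets standing in for the thousands of downloaded pages
pages = [
    "Private sector partnerships drive economic growth",
    "Economic growth depends on the private sector",
]
freq = phrase_frequencies(pages)
# freq["private sector"] == 2, freq["economic growth"] == 2
```

Ranking such counts is what surfaces recurring topics like "private sector" and "economic growth" in the study.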
  6. Ma, Y.: Internet: the global flow of information (1995) 0.02
    0.0154822925 = product of:
      0.10837604 = sum of:
        0.10837604 = weight(_text_:interpretation in 4712) [ClassicSimilarity], result of:
          0.10837604 = score(doc=4712,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 4712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=4712)
      0.14285715 = coord(1/7)
    
    Abstract
    Colours, icons, graphics, hypertext links and other multimedia elements are variables that affect information search strategies and information seeking behaviour. These variables are culturally constructed and represented and are subject to individual and community interpretation. Hypothesizes that users in different communities (in intercultural or multicultural context) will interpret differently the meanings of the multimedia objects on the Internet. Users' interpretations of multimedia objects may differ from the intentions of the designers. A study in this area is being undertaken
  7. Thelwall, M.: A comparison of sources of links for academic Web impact factor calculations (2002) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 4474) [ClassicSimilarity], result of:
          0.081282035 = score(doc=4474,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 4474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4474)
      0.14285715 = coord(1/7)
    
    Abstract
    There has been much recent interest in extracting information from collections of Web links. One tool that has been used is Ingwersen's Web impact factor. It has been demonstrated that several versions of this metric can produce results that correlate with research ratings of British universities showing that, despite being a measure of a purely Internet phenomenon, the results are susceptible to a wider interpretation. This paper addresses the question of which is the best possible domain to count backlinks from, if research is the focus of interest. WIFs for British universities calculated from several different source domains are compared, primarily the .edu, .ac.uk and .uk domains, and the entire Web. The results show that all four areas produce WIFs that correlate strongly with research ratings, but that none produce incontestably superior figures. It was also found that the WIF was less able to differentiate in more homogeneous subsets of universities, although positive results are still possible.
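Ingwersen's Web impact factor, the metric being compared above, is a simple ratio: the number of backlinks a site receives, counted from some source domain, divided by the number of pages in the site. A minimal sketch with invented counts:

```python
def web_impact_factor(backlinks, site_pages):
    # WIF = links pointing at the site / pages in the site;
    # the paper varies where backlinks are counted from (.uk, .edu, the whole Web)
    if site_pages == 0:
        raise ValueError("site has no indexed pages")
    return backlinks / site_pages

# invented counts for one university site, backlinks tallied
# from two different source domains
wif_from_uk = web_impact_factor(12400, 48000)    # ~0.258
wif_from_edu = web_impact_factor(3100, 48000)    # ~0.065
```

The study's question is which source domain makes this ratio correlate best with research ratings, not the absolute values themselves.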
  8. Oppenheim, C.; Selby, K.: Access to information on the World Wide Web for blind and visually impaired people (1999) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 727) [ClassicSimilarity], result of:
          0.081282035 = score(doc=727,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=727)
      0.14285715 = coord(1/7)
    
    Abstract
    The Internet gives access for blind and visually impaired users to previously unobtainable information via Braille or speech synthesis interpretation. This paper looks at how three search engines, AltaVista, Yahoo! and Infoseek, presented their information to a small group of visually impaired and blind users and how accessible individual Internet pages are. Two participants had varying levels of partial sight and two subjects were blind and solely reliant on speech synthesis output. Subjects were asked for feedback on interface design at various stages of their search and any problems they encountered were noted. The barriers to access that were found appear to come about through a lack of knowledge and thought on the part of the page designers themselves. An accessible page does not have to be dull. By adhering to simple guidelines, visually impaired users would be able to access information more effectively than would otherwise be possible. Visually disabled people would also have the same opportunity to access knowledge as their sighted colleagues.
  9. Bodoff, D.; Raban, D.: User models as revealed in web-based research services (2012) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 76) [ClassicSimilarity], result of:
          0.081282035 = score(doc=76,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=76)
      0.14285715 = coord(1/7)
    
    Abstract
    The user-centered approach to information retrieval emphasizes the importance of a user model in determining what information will be most useful to a particular user, given their context. Mediated search provides an opportunity to elaborate on this idea, as an intermediary's elicitations reveal what aspects of the user model they think are worth inquiring about. However, empirical evidence is divided over whether intermediaries actually work to develop a broadly conceived user model. Our research revisits the issue in a web research services setting, whose characteristics are expected to result in more thorough user modeling on the part of intermediaries. Our empirical study confirms that intermediaries engage in rich user modeling. While intermediaries behave differently across settings, our interpretation is that the underlying user model characteristics that intermediaries inquire about in our setting are applicable to other settings as well.
  10. Lucas, W.; Topi, H.: Form and function : the impact of query term and operator usage on Web search results (2002) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 198) [ClassicSimilarity], result of:
          0.067735024 = score(doc=198,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=198)
      0.14285715 = coord(1/7)
    
    Abstract
    Conventional wisdom holds that queries to information retrieval systems will yield more relevant results if they contain multiple topic-related terms and use Boolean and phrase operators to enhance interpretation. Although studies have shown that the users of Web-based search engines typically enter short, term-based queries and rarely use search operators, little information exists concerning the effects of term and operator usage on the relevancy of search results. In this study, search engine users formulated queries on eight search topics. Each query was submitted to the user-specified search engine, and relevancy ratings for the retrieved pages were assigned. Expert-formulated queries were also submitted and provided a basis for comparing relevancy ratings across search engines. Data analysis based on our research model of the term and operator factors affecting relevancy was then conducted. The results show that the difference in the number of terms between expert and nonexpert searches, the percentage of matching terms between those searches, and the erroneous use of nonsupported operators in nonexpert searches explain most of the variation in the relevancy of search results. These findings highlight the need for designing search engine interfaces that provide greater support in the areas of term selection and operator usage
  11. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 4279) [ClassicSimilarity], result of:
          0.067735024 = score(doc=4279,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.14285715 = coord(1/7)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  12. Madden, A.D.; Ford, N.J.; Miller, D.; Levy, P.: Children's use of the internet for information-seeking : what strategies do they use, and what factors affect their performance? (2006) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 615) [ClassicSimilarity], result of:
          0.067735024 = score(doc=615,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=615)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - A common criticism of research into information seeking on the internet is that information seekers are restricted by the demands of the researcher. Another criticism is that the search topics are often imposed by the researcher and, particularly when working with children, domain knowledge could be as important as information-seeking skills. The research reported here attempts to address both these problems. Design/methodology/approach - A total of 15 children, aged 11 to 16, were each set three "think aloud" internet searches. In the first, they were asked to recall the last time they had sought information on the internet, and to repeat the search. For the second, they were given a word, asked to interpret it, then asked to search for their interpretation. For the third, they were asked to recall the last time they had been unsuccessful in a search, and to repeat the search. While performing each task, the children were encouraged to explain their actions. Findings - The paper finds that the factors that determined a child's ability to search successfully appeared to be: the amount of experience the child had of using the internet; the amount of guidance, both from adults and from peers; and the child's ability to explore the virtual environment, and to use the tools available for so doing. Originality/value - Many of the searches performed by participants in this paper were not related to schoolwork, and so some of the search approaches differed from those taught by teachers. Instead, they evolved through exploration and exchange of ideas. Further studies of this sort could provide insights of value to designers of web environments.
  13. Wijnhoven, F.; Brinkhuis, M.: Internet information triangulation : design theory and prototype evaluation (2015) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1724) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1724,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1724)
      0.14285715 = coord(1/7)
    
    Abstract
    Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory (consisting of a kernel theory, meta-requirements, and meta-designs) for software and services that triangulate Internet information. The kernel theory identifies 5 triangulation methods based on Churchman's inquiring systems theory and related meta-requirements. These meta-requirements are used to search for existing software and services that contain design features for Internet information triangulation tools. We discuss a prototyping study of the use of an information triangulator among 72 college students and how their use contributes to their opinion formation. From these findings, we conclude that triangulation tools can contribute to opinion formation by information consumers, especially when the tool is not a mere fact checker but includes the search and delivery of alternative views. Finally, we discuss other empirical propositions and design propositions for an agenda for triangulator developers and researchers. In particular, we propose investment in theory triangulation, that is, tools to automatically detect ethically and theoretically alternative information and views.
  14. Hunt, R.: Civilisation and its disconnects (2008) 0.01
    0.0077411463 = product of:
      0.05418802 = sum of:
        0.05418802 = weight(_text_:interpretation in 2568) [ClassicSimilarity], result of:
          0.05418802 = score(doc=2568,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.25315216 = fieldWeight in 2568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=2568)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - This paper aims to explore some initial and necessarily broad ideas about the effects of the world wide web on our methods of understanding and trusting, online and off. Design/methodology/approach - The paper considers the idea of trust via some of the revolutionary meanings inherent in the world wide web at its public conception in 1994, and some of its different meanings now. It does so in the context of the collaborative reader-writer Web2.0 (of today), and also through a brief exploration of our relationship to the grand narratives (and some histories) of the post-war West. It uses a variety of formal approaches taken from information science, literary criticism, philosophy, history, and journalism studies - together with some practical analysis based on 15 years as a web practitioner and content creator. It is a starting point. Findings - This paper suggests that a pronounced effect of the world wide web is the further atomising of many once-shared Western post-war narratives, and the global democratising of doubt as a powerful though not necessarily helpful epistemological tool. The world wide web is the place that most actively demonstrates contemporary doubt. Research limitations/implications - This is the starting place for a piece of larger cross-faculty (and cross-platform) research into the arena of trust and doubt. In particular, the relationship of concepts such as news, event, history and myth with the myriad content platforms of new media, the idea of the digital consumer, and the impact of geography on knowledge that is enshrined in the virtual. This paper attempts to frame a few of the initial issues inherent in the idea of "trust" in the digital age and argues that without some kind of shared aesthetics of narrative judgment brought about through a far broader public understanding of (rather than an interpretation of) oral, visual, literary and multi-media narratives, stories and plots, we cannot be said to trust many types of knowledge - not just in philosophical terms but also in our daily actions and behaviours. Originality/value - This paper initiates debate about whether the creation of a new academic "space" in which cross-faculty collaborations into the nature of modern narrative (in terms of production and consumption; producers and consumers) might be able to help us to understand more of the social implications of the collaborative content produced for consumption on the world wide web.
  15. Nanfito, N.: The indexed Web : engineering tools for cataloging, storing and delivering Web based documents (1999) 0.01
    0.007160034 = product of:
      0.050120234 = sum of:
        0.050120234 = product of:
          0.10024047 = sum of:
            0.10024047 = weight(_text_:22 in 8727) [ClassicSimilarity], result of:
              0.10024047 = score(doc=8727,freq=4.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.76602525 = fieldWeight in 8727, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8727)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 8.2001 12:22:47
    Source
    Information outlook. 3(1999) no.2, S.18-22
  16. Wilson, D.N.: Citing electronic sites (1996) 0.01
    0.005786181 = product of:
      0.040503263 = sum of:
        0.040503263 = product of:
          0.08100653 = sum of:
            0.08100653 = weight(_text_:22 in 4514) [ClassicSimilarity], result of:
              0.08100653 = score(doc=4514,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.61904186 = fieldWeight in 4514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4514)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Audiovisual librarian. 22(1996) no.2, S.108-110
  17. Notess, G.R.: The internet (1997) 0.01
    0.005786181 = product of:
      0.040503263 = sum of:
        0.040503263 = product of:
          0.08100653 = sum of:
            0.08100653 = weight(_text_:22 in 7783) [ClassicSimilarity], result of:
              0.08100653 = score(doc=7783,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.61904186 = fieldWeight in 7783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=7783)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Encyclopedia of library and information science. Vol.59, [=Suppl.22]
  18. Ghilardi, F.J.M.: The information center of the future : the professional's role (1994) 0.01
    0.005786181 = product of:
      0.040503263 = sum of:
        0.040503263 = product of:
          0.08100653 = sum of:
            0.08100653 = weight(_text_:22 in 2504) [ClassicSimilarity], result of:
              0.08100653 = score(doc=2504,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.61904186 = fieldWeight in 2504, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2504)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    27.12.2015 18:22:38
  19. Rowley, J.: Current awareness in an electronic age (1998) 0.01
    0.0051143095 = product of:
      0.035800166 = sum of:
        0.035800166 = product of:
          0.07160033 = sum of:
            0.07160033 = weight(_text_:22 in 183) [ClassicSimilarity], result of:
              0.07160033 = score(doc=183,freq=4.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.54716086 = fieldWeight in 183, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=183)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 2.1999 17:50:37
    Source
    Online and CD-ROM review. 22(1998) no.4, S.277-279
  20. Orenstein, R.M.: Fulltext sources online (1997) 0.01
    0.005062908 = product of:
      0.035440356 = sum of:
        0.035440356 = product of:
          0.07088071 = sum of:
            0.07088071 = weight(_text_:22 in 2677) [ClassicSimilarity], result of:
              0.07088071 = score(doc=2677,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.5416616 = fieldWeight in 2677, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2677)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
    Review in: Online 22(1998) no.1, S.93-94 (J. Alita)

Types

  • a 188
  • m 20
  • s 13
  • r 2
  • el 1
