Search (1818 results, page 91 of 91)

  • × theme_ss:"Internet"
  • × year_i:[1990 TO 2000}
  1. Delozier, E.P.: Identifying and documenting objects and services on the Internet : the Uniform Resource Locator (1996) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 7271) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=7271,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 7271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=7271)
      0.0625 = coord(1/16)
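
    The 0.00 beside each hit is simply this score rounded to two decimal places. As a minimal sketch of how the explain tree above composes, the following Python re-derives the score from the constants printed in the tree (the idf line is ClassicSimilarity's 1 + ln(maxDocs/(docFreq+1)), which reproduces the printed 1.3602545):

      import math

      # Constants copied from the explain tree for doc 7271.
      max_docs, doc_freq = 44218, 30841
      freq = 2.0                    # termFreq of "in" in the text field
      query_norm = 0.027283683
      field_norm = 0.046875
      coord = 1 / 16                # 1 of 16 query clauses matched

      idf = 1 + math.log(max_docs / (doc_freq + 1))   # 1.3602545
      tf = math.sqrt(freq)                            # 1.4142135
      query_weight = idf * query_norm                 # 0.037112754
      field_weight = tf * idf * field_norm            # 0.09017298

      print(coord * query_weight * field_weight)      # ~2.0916048e-04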
    
    Abstract
    Discusses the role of the URL as a means of uniquely identifying an item of information on the WWW in the context of traditional methods. Includes: standard bibliographic description; LoC card number; ISBN; ISSN; MEDLINE Unique Identifier; and OCLC Control Number. Presents the general URL model and the basic structure of URL codes. Discusses specific URL structures: file related URLs (file and ftp); WWW URLs (http); Gopher URLs (gopher); electronic mail URLs (mailto); Usenet newsgroup URLs (news); and remote login URLs (telnet and tn3270). Notes other proposals for identifying Internet resources and services that are often misinterpreted as URLs, and lists some of the characters which may not be used within a URL. Although the URL is an official standard for referencing WWW resources, it is not yet recognized as a universal citation model for Internet resources.
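
    As a quick illustration of the general URL model the article presents (scheme, host, path), the Python standard library splits each of the URL families discussed; the example addresses below are invented, not taken from the article:

      from urllib.parse import urlparse

      # One invented example per scheme family discussed in the article.
      examples = [
          "ftp://ftp.example.org/pub/readme.txt",
          "http://www.example.org/index.html",
          "gopher://gopher.example.org/1/directory",
          "mailto:user@example.org",
          "news:comp.infosystems.www.misc",
          "telnet://host.example.org/",
      ]

      for url in examples:
          p = urlparse(url)
          print(f"{p.scheme:8} host={p.netloc or '-':24} path={p.path or '-'}")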
  2. Mendelzon, A.O.; Mihaila, G.A.; Milo, T.: Querying the World Wide Web (1997) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 7860) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=7860,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 7860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=7860)
      0.0625 = coord(1/16)
    
    Abstract
    The WWW is a large, heterogeneous, distributed collection of documents connected by hypertext links. The most common technology currently used for searching the Web depends on sending information retrieval requests to 'index servers' that index as many documents as they can find by navigating the network. One problem with this is that users must be aware of the various index servers (over a dozen of them are currently deployed on the Web), of their strengths and weaknesses, and of the peculiarities of their query interfaces. A more serious problem is that these queries cannot exploit the structure and topology of the document network. In this paper we propose a query language, WebSQL, that takes advantage of multiple index servers without requiring users to know about them, and that integrates textual retrieval with structure and topology-based queries.
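
    The abstract does not quote WebSQL syntax, but the kind of query it describes - a textual predicate combined with reachability over the link topology - can be sketched in a few lines of Python over a toy link graph (all URLs and page texts below are invented):

      # Toy link graph: url -> (page text, outgoing links).
      pages = {
          "http://a.example/": ("start page", ["http://b.example/", "http://c.example/"]),
          "http://b.example/": ("querying document collections", ["http://c.example/"]),
          "http://c.example/": ("hypertext topology and structure", []),
      }

      def reachable_matching(seed: str, term: str) -> list[str]:
          """Pages reachable from `seed` whose text contains `term`."""
          seen, stack, hits = set(), [seed], []
          while stack:
              url = stack.pop()
              if url in seen or url not in pages:
                  continue
              seen.add(url)
              text, links = pages[url]
              if term in text:
                  hits.append(url)
              stack.extend(links)
          return hits

      print(reachable_matching("http://a.example/", "topology"))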
  3. Uhlinger, E.S.; Heinrich, P.L.: Converting databases to searchable formats for the World Wide Web (1997) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 167) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=167,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=167)
      0.0625 = coord(1/16)
    
    Abstract
    Describes the project, undertaken at the Cadet Hand Library (CHL), Bodega Marine Laboratory (BML), University of California, Davis, to convert the CHL and other BML catalogues into a format searchable via the WWW. Focuses on the use of Best WebPro software, designed to create Web searchable versions of local databases, to create a WWW searchable version of the catalogues. Describes the setting up of the Best WebPro databases, their specific searching capabilities, and details of the 6 specific databases converted (Catalogue, Student reports, GBALC serials, Publications, Telephone directory, Tide tables) with data (in MBytes) for: original file size; Best WebPro file size; index size and number of indexes; and total size.
  4. MacDougall, S.: Rethinking indexing : the impact of the Internet (1996) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 704) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=704,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=704)
      0.0625 = coord(1/16)
    
    Abstract
    Considers the challenge to professional indexers posed by the Internet. Indexing and searching on the Internet appear to have taken a retrograde step, as well-developed and efficient information retrieval techniques have been replaced by cruder techniques, involving automatic keyword indexing and frequency ranking, leading to large retrieval sets and low precision. This is made worse by the apparent acceptance of this poor performance by Internet users and the feeling, on the part of indexers, that they are being bypassed by the producers of these hyperlinked menus and search engines. Key issues are: how far 'human' indexing will still be required in the Internet environment; how indexing techniques will have to change to stay relevant; and the future role of indexers. The challenge facing indexers is to adapt their skills to suit the online environment and to convince publishers of the need for efficient indexes on the Internet.
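
    To make concrete what 'automatic keyword indexing and frequency ranking' means, here is a deliberately crude Python sketch of that technique, the kind the article argues against (stopword list and documents invented):

      import re
      from collections import Counter

      STOPWORDS = {"the", "of", "and", "a", "to", "in", "on"}

      def keyword_index(doc: str) -> Counter:
          # Tokenize, drop stopwords, count raw term frequencies.
          words = re.findall(r"[a-z]+", doc.lower())
          return Counter(w for w in words if w not in STOPWORDS)

      docs = {"d1": "indexing the Internet and indexing the Web",
              "d2": "professional indexers and controlled vocabularies"}
      indexes = {name: keyword_index(text) for name, text in docs.items()}

      # Frequency ranking: more occurrences of the query term = higher rank.
      query = "indexing"
      print(sorted(indexes, key=lambda d: indexes[d][query], reverse=True))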
  5. Balas, J.: Virtual support for the virtual librarian (1998) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 1822) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=1822,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 1822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1822)
      0.0625 = coord(1/16)
    
    Source
    Computers in libraries. 18(1998) no.1, S.40-42
  6. Haas, S.: Metadata mania : an overview (1998) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 2222) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=2222,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 2222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2222)
      0.0625 = coord(1/16)
    
    Abstract
    Describes the structure of metadata formats with particular reference to the taxonomy of data formats set out by the BIBLINK report of the UK Office for Library and Information Networking, based on their underlying complexity. Refers to 3 main types of metadata: Dublin Core, MARC, and Federal Geographic Data Committee (FGDC). Provides practical examples of the actual codings used, illustrated with reference to the Dublin Core, MARC and FGDC elements in selected Web sites. Ends with a glossary and a list of Web sites containing background information on metadata, such as the IAMSLIC metadata homepage.
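
    As a small, hedged sketch of one of the codings surveyed, the following Python emits Dublin Core elements as HTML meta tags in the style later standardized by RFC 2731; the element values are taken from this record, and the exact tag conventions varied in practice:

      # Dublin Core elements embedded as HTML <meta> tags.
      record = {
          "DC.title":   "Metadata mania : an overview",
          "DC.creator": "Haas, S.",
          "DC.date":    "1998",
          "DC.type":    "Text",
      }
      print('<link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">')
      for name, content in record.items():
          print(f'<meta name="{name}" content="{content}">')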
  7. McMurdo, G.: How the Internet was indexed (1995) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 2411) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=2411,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 2411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2411)
      0.0625 = coord(1/16)
    
    Abstract
    The scope and characteristics of what may be considered the first three generations of automated Internet indexing systems are identified and described as to their methods of compiling their datasets, their search interfaces, and the associated etymological metaphors and mythologies. These three are suggested to be: firstly, the Archie system for single keyword and regular expression searches of the file lists of anonymous ftp sites; secondly, the Veronica system for Boolean keyword-in-title searches of the world's gopher servers; thirdly, a range of software techniques known as robots and search engines, which compile searchable databases of information accessible via the WWW, such as the currently popular Lycos project at Carnegie Mellon University. The present dominance of WWW client software as the preferred interface to Internet information has led to provision of methods of also using the first two systems by this single interface, and these are also noted.
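
    The first generation is easy to make concrete: Archie evaluated keyword or regular-expression queries against harvested file listings from anonymous ftp sites. A minimal Python analogue (the file names are invented):

      import re

      file_list = [
          "pub/gnu/emacs-19.28.tar.gz",
          "pub/tex/latex2e.zip",
          "pub/internet/rfc1738.txt",
      ]
      # Archie-style regular-expression search over the listing.
      pattern = re.compile(r"rfc\d+\.txt$")
      print([f for f in file_list if pattern.search(f)])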
  8. Stephens, D.: Managing the Web-enhanced geographic information service (1997) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 2719) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=2719,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 2719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2719)
      0.0625 = coord(1/16)
    
    Abstract
    A number of map libraries have come to view the WWW as a mechanism for enhancing existing services. Examines key management issues involved in delivering geographic information and services on the Web with reference to the Web-enhanced service provided by the University of Virginia Library's Geographic Information Center (GIC). The integration of GIC's Web delivery into daily geographic information service was guided by 4 factors: a defined clientele, articulated scope of services, direct emphasis of the site, and technical support. Discusses management issues concerning building appropriate collections for Web delivery; managing access to applications and tools; and evaluating use of the site. A focus on meeting the needs of primary users will be consistent with the mission and goals of the academic library
  9. Cousins, S.A.: COPAC: the new national OPAC service based on the CURL database (1997) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 2834) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=2834,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 2834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2834)
      0.0625 = coord(1/16)
    
    Footnote
    See also Mowat, I.R.M. in: Online and CD-ROM review 20(1996) no.4
  10. Fidel, R.; Davies, R.K.; Douglass, M.H.; Holder, J.K.; Hopkins, C.J.; Kushner, E.J.; Miyagishima, B.K.; Toney, C.D.: ¬A visit to the information mall : Web searching behavior of high school students (1999) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 2949) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=2949,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 2949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2949)
      0.0625 = coord(1/16)
    
    Abstract
    This article analyzes the Web searching behavior of high school students working on homework assignments, through field observations in class and at the terminal with students thinking aloud, and through interviews with various participants, including the teacher and librarian. Students performed focused searching and progressed through a search swiftly and flexibly. They used landmarks and assumed that one can always start a new search and ask for help. They were satisfied with their searches and the results, but impatient with slow response. The students enjoyed searching the Web because it had a variety of formats, it showed pictures, it covered a multitude of subjects, and it provided easy access to information. The difficulties and problems students encountered emphasize the need for training for all involved, and for a system design that is based on user seeking and searching behavior.
  11. Holmes, S.F.: Reaching the whole community through the Internet (1998) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 3363) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=3363,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 3363, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3363)
      0.0625 = coord(1/16)
    
    Source
    Computers in libraries. 18(1998) no.4, S.51-55
  12. Weiss, S.C.: ¬The seamless, Web-based library : a meta site for the 21st century (1999) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 6542) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=6542,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 6542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6542)
      0.0625 = coord(1/16)
    
    Abstract
    Taking a step beyond meta search engines, which require Web site evaluation skills and a knowledge of how to construct effective search statements, we encounter the concept of a seamless, Web-based library. These are electronic libraries created by information professionals, meta sites for the 21st century. Here is a place where average people with average Internet skills can find significant Web sites arranged under a hierarchy of subject categories. Having observed client behavior in a university library setting for a quarter of a century, it is apparent that the extent to which information is used has always been determined by content applicable to user needs, an easy-to-understand design, and high visibility. These same elements have determined the extent to which Internet Quick Reference (IQR), a seamless, Web-based library at cc.usu.edu/~stewei/hot.htm, has been used.
  13. Hochheiser, H.; Shneiderman, B.: Understanding patterns of user visits to Web sites : Interactive Starfield visualizations of WWW log data (1999) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 6713) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=6713,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 6713, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6713)
      0.0625 = coord(1/16)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. While useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive starfield visualizations can be used to provide users with greater abilities to interpret and explore web log data. By combining two-dimensional displays of thousands of individual access requests, color and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional web log analysis tools. We introduce a series of interactive starfield visualizations, which can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
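
    The authors' interactive tool cannot be reconstructed from the abstract, but the basic display idea - thousands of individual requests as a two-dimensional point cloud with colour coding - fits in a few lines of matplotlib; the log records below are randomly generated stand-ins:

      import random
      import matplotlib.pyplot as plt

      # One dot per HTTP request: x = time, y = page, colour = status.
      records = [(random.uniform(0, 24), random.randrange(20),
                  random.choice([200, 304, 404])) for _ in range(2000)]

      colors = {200: "tab:blue", 304: "tab:gray", 404: "tab:red"}
      for status, color in colors.items():
          pts = [(t, y) for t, y, s in records if s == status]
          if pts:
              xs, ys = zip(*pts)
              plt.scatter(xs, ys, s=4, c=color, label=str(status))
      plt.xlabel("hour of day")
      plt.ylabel("page index")
      plt.legend(title="HTTP status")
      plt.show()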
  14. Koch, T.; Ardö, A.; Noodén, L.: ¬The construction of a robot-generated subject index : DESIRE II D3.6a, Working Paper 1 (1999) 0.00
    2.0916048E-4 = product of:
      0.0033465677 = sum of:
        0.0033465677 = weight(_text_:in in 1668) [ClassicSimilarity], result of:
          0.0033465677 = score(doc=1668,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.09017298 = fieldWeight in 1668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1668)
      0.0625 = coord(1/16)
    
    Abstract
    This working paper describes the creation of a test database for carrying out the automatic classification tasks of DESIRE II work package D3.6a. It is an improved version of NetLab's existing "All" Engineering database, created after a comparative study of the outcome of two different approaches to collecting the documents. These two methods were selected from seven different general methodologies for building robot-generated subject indices, presented in this paper. We found a surprisingly low overlap between the Engineering link collections we used as seed pages for the robot, and subsequently an even more surprisingly low overlap between the resources collected by the two different approaches, in spite of using basically the same services to start the harvesting process from. An intellectual evaluation of the contents of both databases showed almost exactly the same percentage of relevant documents (77%), indicating that the main difference between the approaches was the coverage of the resulting database.
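
    The paper reports the low overlap without naming a measure; one common choice is the Jaccard coefficient, sketched here over invented URL sets:

      # Jaccard overlap between the URL sets harvested by two approaches.
      approach_a = {"http://eng.example/1", "http://eng.example/2", "http://eng.example/3"}
      approach_b = {"http://eng.example/3", "http://eng.example/4"}

      jaccard = len(approach_a & approach_b) / len(approach_a | approach_b)
      print(f"overlap: {jaccard:.2f}")    # 0.25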
  15. Clower, T.: ¬A review of regulatory barriers : is the information superhighway really imminent? (1994) 0.00
    1.7430041E-4 = product of:
      0.0027888066 = sum of:
        0.0027888066 = weight(_text_:in in 53) [ClassicSimilarity], result of:
          0.0027888066 = score(doc=53,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.07514416 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=53)
      0.0625 = coord(1/16)
    
    Abstract
    There is much excitement in the information sciences about the imminent deployment of a nationwide information superhighway. However, there remain many obstacles to the development of the infrastructure to support the superhighway, not the least of which is the hodgepodge of regulations administered by state utility commissions and other regulatory agencies. This paper compares state communications regulatory policies and their potential impacts on the development of the physical infrastructure to support an information superhighway. The paper also examines the possibility of federal intervention into state policymaking where there is resistance to formulating policies consistent with the federal administration's goal of information infrastructure development. Finally, using government and private industry data, an estimation of the direct impacts of infrastructure construction to support a nationwide information superhighway is calculated, including direct spending, the creation of construction-related jobs, and other potential social impacts. The findings indicate that state regulatory policies will have an impact on the speed and cost of infrastructure development. However, court rulings have limited the likelihood of federal intervention into state agencies that remain intractable. For the United States, the construction of a fiber optic telecommunications network will represent direct spending of more than $400 billion and create more than one million direct and indirect temporary jobs. The paper concludes with a call for additional study on the social and economic impacts of a telecommunications superhighway.
  16. Scull, C.; Milewski, A.; Millen, D.: Envisioning the Web : user expectations about the cyber-experience (1999) 0.00
    1.7430041E-4 = product of:
      0.0027888066 = sum of:
        0.0027888066 = weight(_text_:in in 6539) [ClassicSimilarity], result of:
          0.0027888066 = score(doc=6539,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.07514416 = fieldWeight in 6539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6539)
      0.0625 = coord(1/16)
    
    Abstract
    An exploratory research project was undertaken to understand how novice college students and Web savvy librarians initially envisioned the Internet and how these representations changed over time and with experience. Users' representation of the Internet typically contained few meaningful reference points excepting "landmarks" such as search sites and frequently visited sites. For many of the users, the representation was largely procedural, and therefore organized primarily by time. All novice users conceptualized search engines as literally searching the entire Internet when a query was issued. Web savvy librarians understood the limitations of search engines better, but did still expect search engines to follow familiar organizational schemes and to indicate their cataloguing system. Although all users initially approached the Internet with high expectations of information credibility, expert users learned early on that "anyone can publish." In response to the lack of clear credibility conventions, librarians applied the same criteria they used with traditional sources. However, novice users retained high credibility expectations because their exposure was limited to the subscription-based services within their college library. And finally, during an assigned search task new users expected "step by step" instructions and self-evident cues to interaction. They were also overwhelmed and confused by the amount of information "help" displayed and became impatient when a context appropriate solution to their problem was not immediately offered
  17. Williams, P.; Nicholas, D.: ¬The migration of news to the web (1999) 0.00
    1.7430041E-4 = product of:
      0.0027888066 = sum of:
        0.0027888066 = weight(_text_:in in 735) [ClassicSimilarity], result of:
          0.0027888066 = score(doc=735,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.07514416 = fieldWeight in 735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=735)
      0.0625 = coord(1/16)
    
    Abstract
    Virtually all UK and US national newspapers and the vast majority of regional and even local titles are now represented on the web. Indeed, the Yahoo news and media directory lists no fewer than 114 UK newspapers online (as of November 1998). Broadcasters from the BBC and Sky downwards, and all the famous news agencies (Press Association, Reuters etc.), also boast comprehensive Internet services. With such an array of sources available, the prospect of mass access to the Internet, possibly via TV terminals, suggests that more and more people may soon opt for this medium to receive the bulk of their news information. This paper gives an overview of the characteristics of the medium, illustrated with examples of how these are being used to both facilitate and enhance the content and dissemination of the news product. These characteristics include hyperlinking to external information sources, providing archive access to past reports, reader interactivity, and other features not possible to incorporate into more passive media such as the hardcopy newspaper. From a survey of UK and US news providers it is clear that American newspapers are exploiting the advantages of web information dissemination to a far greater extent than their British counterparts, with the notable exception of The Electronic Telegraph. UK broadcasters, however, generally appear to have adapted better to the new medium, with the BBC rivaling CNN in the depth and extent of its news coverage, use of links, and other elements.
  18. Eddings, J.: How the Internet works (1994) 0.00
    1.7430041E-4 = product of:
      0.0027888066 = sum of:
        0.0027888066 = weight(_text_:in in 1514) [ClassicSimilarity], result of:
          0.0027888066 = score(doc=1514,freq=2.0), product of:
            0.037112754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.027283683 = queryNorm
            0.07514416 = fieldWeight in 1514, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1514)
      0.0625 = coord(1/16)
    
    Abstract
    How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.

Types

  • a 1461
  • m 224
  • s 83
  • el 25
  • x 22
  • r 16
  • i 13
  • b 6
  • ? 2
  • h 2
  • l 1
