Search (412 results, page 1 of 21)

  • theme_ss:"Internet"
  • type_ss:"a"
  1. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 5860) [ClassicSimilarity], result of:
          0.09482904 = score(doc=5860,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 5860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5860)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
              0.035440356 = score(doc=5860,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 5860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5860)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Richer graph models permit authors to 'program' the browsing behaviour they want WWW readers to see by turning the hypertext into a hyperprogram with specific semantics. Multiple browsing streams can be started under the author's control and then kept in step through the synchronization mechanisms provided by the graph model. Adds a Semantic Web Graph Layer (SWGL) which allows dynamic interpretation of link and node structures according to graph models. Details the SWGL and its architecture, some sample protocol implementations, and the latest extensions to MHTML
    Date
    1. 8.1996 22:08:06
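    Note on the relevance figures: the breakdowns shown with each entry are Lucene "explain" trees for the ClassicSimilarity (tf-idf) ranking. Each matching term contributes queryWeight x fieldWeight, the contributions are summed, and the sum is scaled by coord(matching clauses / total query clauses). As a sanity check, the following Python sketch recomputes entry 1's score from the values quoted above; the formulas are Lucene's ClassicSimilarity, while the helper name and script are our own.

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity building blocks:
          # tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1
          tf = math.sqrt(freq)
          idf = math.log(max_docs / (doc_freq + 1)) + 1.0
          query_weight = idf * query_norm        # e.g. 5.7281795 * 0.037368443 = 0.21405315
          field_weight = tf * idf * field_norm   # e.g. 1.4142135 * 5.7281795 * 0.0546875
          return query_weight * field_weight

      # Term 'interpretation' in doc 5860 (docFreq=390, maxDocs=44218):
      w_interpretation = term_score(2.0, 390, 44218, 0.037368443, 0.0546875)   # ~0.09482904
      # Term '22' in the same doc, halved by its inner coord(1/2):
      w_22 = term_score(2.0, 3622, 44218, 0.037368443, 0.0546875) / 2          # ~0.017720178
      # Overall: sum of the contributions, scaled by coord(2/7):
      print((w_interpretation + w_22) * 2 / 7)                                 # ~0.032156922

    The same arithmetic explains the other entries: the idf term ln(maxDocs/(docFreq+1)) + 1 is why a rare index term such as 'quantenphysik' (docFreq=10, idf 9.298992) dominates the one entry it matches.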
  2. Kaeser, E.: ¬Das postfaktische Zeitalter (2016) 0.02
    0.019490037 = product of:
      0.068215124 = sum of:
        0.05747507 = weight(_text_:interpretation in 3080) [ClassicSimilarity], result of:
          0.05747507 = score(doc=3080,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.2685084 = fieldWeight in 3080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3080)
        0.01074005 = product of:
          0.0214801 = sum of:
            0.0214801 = weight(_text_:22 in 3080) [ClassicSimilarity], result of:
              0.0214801 = score(doc=3080,freq=4.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.16414827 = fieldWeight in 3080, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3080)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Content
    "Es gibt Daten, Informationen und Fakten. Wenn man mir eine Zahlenreihe vorsetzt, dann handelt es sich um Daten: unterscheidbare Einheiten, im Fachjargon: Items. Wenn man mir sagt, dass diese Items stündliche Temperaturangaben der Aare im Berner Marzilibad bedeuten, dann verfüge ich über Information - über interpretierte Daten. Wenn man mir sagt, dies seien die gemessenen Aaretemperaturen am 22. August 2016 im Marzili, dann ist das ein Faktum: empirisch geprüfte interpretierte Daten. Dieser Dreischritt - Unterscheiden, Interpretieren, Prüfen - bildet quasi das Bindemittel des Faktischen, «the matter of fact». Wir alle führen den Dreischritt ständig aus und gelangen so zu einem relativ verlässlichen Wissen und Urteilsvermögen betreffend die Dinge des Alltags. Aber wie schon die Kurzcharakterisierung durchblicken lässt, bilden Fakten nicht den Felsengrund der Realität. Sie sind kritikanfällig, sowohl von der Interpretation wie auch von der Prüfung her gesehen. Um bei unserem Beispiel zu bleiben: Es kann durchaus sein, dass man uns zwei unterschiedliche «faktische» Temperaturverläufe der Aare am 22. August 2016 vorsetzt.
    - The amen of postmodern thinking. What now? We attribute the discrepancy, say, to reading errors (that is, to faulty interpretation), or else to different measurement methods. At once a space for interpretation opens up. Nietzsche's famous dictum echoes here, that there are only interpretations, no facts. Or, as the English phrase has it: 'facts are factitious', facts are artifacts, they are man-made. This view is, so to speak, the amen of postmodern thinking. And what makes it particularly insidious is that it is a half-truth. It is true that facts are often the result of a lengthy process of inquiry, above all today, when we increasingly deal with statements about complex systems such as migration dynamics, meteorology, or markets. Interpretive disagreement among experts is already almost proverbial.
  3. Moll, S.: ¬Der Urknall des Internets : 20 Jahre WWW (2011) 0.02
    0.01785056 = product of:
      0.12495391 = sum of:
        0.12495391 = weight(_text_:quantenphysik in 3720) [ClassicSimilarity], result of:
          0.12495391 = score(doc=3720,freq=2.0), product of:
            0.34748885 = queryWeight, product of:
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.037368443 = queryNorm
            0.35959113 = fieldWeight in 3720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3720)
      0.14285715 = coord(1/7)
    
    Content
    "Alle großen Erfindungen der Menschheitsgeschichte haben einen Entstehungsmythos. Einsteins Trambahnfahrt durch Zürich beispielsweise oder der berühmte Apfel, der Newton angeblich auf den Kopf gefallen ist. Als Tim Berners-Lee, damals Physikstudent in Manchester, Mitte der 70er Jahre mit seinem Vater in einem Stadtpark unter einem Baum saß, unterhielten sich die beiden darüber, dass sie doch in ihrem Garten auch einen solchen Baum gebrauchen könnten. Der Vater, ein Mathematiker, der an einem der ersten kommerziell genutzten Computer der Welt arbeitete, bemerkte, dass die Fähigkeit, die abstrakte Idee eines schattigen Baumes auf einen anderen Ort zu übertragen, doch eine einmalig menschliche sei. Computer könnten so etwas nicht. Das Problem ließ Berners-Lee nicht los. Deshalb suchte er, während er in den 80er Jahren als Berater am europäischen Labor für Quantenphysik (CERN) in der Schweiz arbeitete, noch immer nach einem Weg, seinem Computer beizubringen, Verbindungen zwischen den disparaten Dokumenten und Notizen auf seiner Festplatte herzustellen. Er entwarf deshalb ein System, das heute so alltäglich ist, wie Kleingeld. Lee stellte eine direkte Verknüpfung her zwischen Wörtern und Begriffen in Dokumenten und den gleichen Begriffen in anderen Dokumenten: Der Link war geboren.
  4. Access to electronic information, services and networks : an interpretation of the LIBRARY BILL OF RIGHTS (1995) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 4713) [ClassicSimilarity], result of:
          0.11495014 = score(doc=4713,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 4713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4713)
      0.14285715 = coord(1/7)
    
    Abstract
    At the 1996 Midwinter Meeting of the 57,000-member ALA in San Antonio, the ALA affirmed user rights in cyberspace and called on the US Congress to protect public access to information during the shift from print to electronic publishing. The latest ALA News over the net reported the words of Betty J. Turock, president of the ALA: 'Free access to information is essential to a democracy. Our concern as professional librarians is that new technology not become a barrier for members of the public.' The new 'Access to Electronic Information, Services and Networks: an interpretation of the Library Bill of Rights' was adopted by the ALA Council at the Midwinter Meeting and will have profound implications and uses for many libraries and librarians in the months to come. Because of its significance and potential impact, the text of this document has been downloaded from the ALA's Web site at http://www.ala.org to facilitate its use by readers of this journal
  5. Hochheiser, H.; Shneiderman, B.: Using interactive visualizations of WWW log data to characterize access patterns and inform site design (2001) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5765) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5765,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5765, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5765)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. Although useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive visualizations can be used to provide users with greater abilities to interpret and explore Web log data. By combining two-dimensional displays of thousands of individual access requests, color, and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional Web log analysis tools. We introduce a series of interactive visualizations that can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
  6. Hochheiser, H.; Shneiderman, B.: Understanding patterns of user visits to Web sites : Interactive Starfield visualizations of WWW log data (1999) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 6713) [ClassicSimilarity], result of:
          0.11495014 = score(doc=6713,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 6713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=6713)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. While useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive starfield visualizations can be used to provide users with greater abilities to interpret and explore web log data. By combining two-dimensional displays of thousands of individual access requests, color and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional web log analysis tools. We introduce a series of interactive starfield visualizations, which can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
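    The two Hochheiser & Shneiderman abstracts above describe the same core technique: plot every individual HTTP request as one point in a zoomable two-dimensional display, with color and size encoding further attributes. As a minimal sketch of that idea (not the authors' tool), the following assumes Common Log Format input, an invented three-line sample, and matplotlib.

      import re
      from datetime import datetime
      import matplotlib.pyplot as plt

      # Invented mini-log in Common Log Format; a real study would read a server's access log.
      LOG_LINES = [
          '1.2.3.4 - - [10/Oct/1999:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
          '1.2.3.5 - - [10/Oct/1999:13:56:01 -0700] "GET /stats.html HTTP/1.0" 404 512',
          '1.2.3.4 - - [10/Oct/1999:14:02:12 -0700] "GET /papers/vis.pdf HTTP/1.0" 200 88211',
      ]
      PATTERN = re.compile(r'\[(?P<ts>[^\]]+)\] "GET (?P<url>\S+) [^"]*" (?P<status>\d+) (?P<size>\d+)')

      times, rows, sizes, colors = [], [], [], []
      row_of_url = {}                                    # one y-axis row per distinct URL
      for line in LOG_LINES:
          m = PATTERN.search(line)
          times.append(datetime.strptime(m['ts'], '%d/%b/%Y:%H:%M:%S %z'))
          rows.append(row_of_url.setdefault(m['url'], len(row_of_url)))
          sizes.append(max(int(m['size']) // 1000, 5))   # size coding: bytes transferred
          colors.append('tab:green' if m['status'] == '200' else 'tab:red')  # color: status code

      plt.scatter(times, rows, s=sizes, c=colors)        # one point per access request
      plt.yticks(range(len(row_of_url)), list(row_of_url))
      plt.xlabel('time of request')
      plt.ylabel('requested URL')
      plt.title('Starfield-style view of an HTTP log')
      plt.show()

    Zooming and panning, which the papers treat as essential, come with matplotlib's interactive window; a production tool would add attribute-based filtering on top.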
  7. Thelwall, M.; Vann, K.; Fairclough, R.: Web issue analysis : an integrated water resource management case study (2006) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5906) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5906,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5906, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5906)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article Web issue analysis is introduced as a new technique to investigate an issue as reflected on the Web. The issue chosen, integrated water resource management (IWRM), is a United Nations-initiated paradigm for managing water resources in an international context, particularly in developing nations. As with many international governmental initiatives, there is a considerable body of online information about it: 41,381 hypertext markup language (HTML) pages and 28,735 PDF documents mentioning the issue were downloaded. A page uniform resource locator (URL) and link analysis revealed the international and sectoral spread of IWRM. A noun and noun phrase occurrence analysis was used to identify the issues most commonly discussed, revealing some unexpected topics such as private sector and economic growth. Although the complexity of the methods required to produce meaningful statistics from the data is disadvantageous to easy interpretation, it was still possible to produce data that could be subject to a reasonably intuitive interpretation. Hence Web issue analysis is claimed to be a useful new technique for information science.
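    The noun and noun phrase analysis mentioned above relies on part-of-speech tagging in the actual study. As a rough stand-in for phrase counting over a downloaded corpus, the sketch below merely tallies adjacent word pairs after stopword removal; the two sample sentences and the stopword list are invented.

      import re
      from collections import Counter

      # Stand-in corpus; the study analysed tens of thousands of downloaded HTML/PDF documents.
      docs = [
          "Integrated water resource management supports economic growth.",
          "The private sector funds integrated water resource management projects.",
      ]
      STOP = {'the', 'a', 'an', 'of', 'and', 'in', 'to'}

      counts = Counter()
      for doc in docs:
          words = [w for w in re.findall(r'[a-z]+', doc.lower()) if w not in STOP]
          counts.update(zip(words, words[1:]))   # adjacent pairs as crude phrase candidates

      for (w1, w2), n in counts.most_common(3):
          print(w1, w2, n)                       # e.g. 'water resource' appears twice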
  8. Ma, Y.: Internet: the global flow of information (1995) 0.02
    0.0154822925 = product of:
      0.10837604 = sum of:
        0.10837604 = weight(_text_:interpretation in 4712) [ClassicSimilarity], result of:
          0.10837604 = score(doc=4712,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 4712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=4712)
      0.14285715 = coord(1/7)
    
    Abstract
    Colours, icons, graphics, hypertext links and other multimedia elements are variables that affect information search strategies and information seeking behaviour. These variables are culturally constructed and represented and are subject to individual and community interpretation. Hypothesizes that users in different communities (in intercultural or multicultural context) will interpret differently the meanings of the multimedia objects on the Internet. Users' interpretations of multimedia objects may differ from the intentions of the designers. A study in this area is being undertaken
  9. Court, J.; Lovis, G.; Fassbind-Eigenheer, R.: De la tradition orale aux reseaux de communication : la tradition orale (1998) 0.01
    0.013547006 = product of:
      0.09482904 = sum of:
        0.09482904 = weight(_text_:interpretation in 3994) [ClassicSimilarity], result of:
          0.09482904 = score(doc=3994,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 3994, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3994)
      0.14285715 = coord(1/7)
    
    Abstract
    Summarises a selection of the presentations and workshops under one of the main themes of the Association of Swiss Libraries and Librarians congress held in Yverdon, Sept 1998. Sessions covered comprise: a workshop on stories in libraries (the history of the tradition in French libraries and criteria for selecting material); oral and written traditions (a presentation on the continuing existence of various schools of interpretation, e.g. mythological and anthropological, in relation to the importance of individual contact); and listening - reading - writing (a presentation on the links between these 3 forms of communication in the context of the challenge for libraries in the field of children's education)
  10. Thelwall, M.: ¬A comparison of sources of links for academic Web impact factor calculations (2002) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 4474) [ClassicSimilarity], result of:
          0.081282035 = score(doc=4474,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 4474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4474)
      0.14285715 = coord(1/7)
    
    Abstract
    There has been much recent interest in extracting information from collections of Web links. One tool that has been used is Ingwersen's Web impact factor. It has been demonstrated that several versions of this metric can produce results that correlate with research ratings of British universities showing that, despite being a measure of a purely Internet phenomenon, the results are susceptible to a wider interpretation. This paper addresses the question of which is the best possible domain to count backlinks from, if research is the focus of interest. WIFs for British universities calculated from several different source domains are compared, primarily the .edu, .ac.uk and .uk domains, and the entire Web. The results show that all four areas produce WIFs that correlate strongly with research ratings, but that none produce incontestably superior figures. It was also found that the WIF was less able to differentiate in more homogeneous subsets of universities, although positive results are still possible.
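    For readers unfamiliar with the metric compared above: Ingwersen's Web impact factor (WIF) is, roughly, the number of pages in some source domain that link to a site, divided by the number of pages of the site itself. The sketch below restates that ratio; the counts are invented for illustration and do not come from the paper.

      def web_impact_factor(inlinking_pages: int, site_pages: int) -> float:
          # Roughly Ingwersen's WIF: pages in a source domain linking to the
          # target site, divided by the number of pages in the target site.
          return inlinking_pages / site_pages

      # Invented counts for one university site, per source domain compared in the paper:
      inlinks = {'.edu': 1200, '.ac.uk': 3400, '.uk': 5100, 'whole Web': 9800}
      SITE_PAGES = 25000
      for domain, pages in inlinks.items():
          print(f'WIF from {domain}: {web_impact_factor(pages, SITE_PAGES):.4f}')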
  11. Oppenheim, C.; Selby, K.: Access to information on the World Wide Web for blind and visually impaired people (1999) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 727) [ClassicSimilarity], result of:
          0.081282035 = score(doc=727,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=727)
      0.14285715 = coord(1/7)
    
    Abstract
    The Internet gives blind and visually impaired users access to previously unobtainable information via Braille or speech-synthesis interpretation. This paper looks at how three search engines, AltaVista, Yahoo! and Infoseek, presented their information to a small group of visually impaired and blind users, and at how accessible individual Internet pages are. Two participants had varying levels of partial sight and two subjects were blind and solely reliant on speech-synthesis output. Subjects were asked for feedback on interface design at various stages of their search, and any problems they encountered were noted. The barriers to access that were found appear to arise from a lack of knowledge and thought on the part of the page designers themselves. An accessible page does not have to be dull: if designers adhered to simple guidelines, visually impaired users would be able to access information more effectively than is otherwise possible. Visually disabled people would then have the same opportunity to access knowledge as their sighted colleagues.
  12. Bodoff, D.; Raban, D.: User models as revealed in web-based research services (2012) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 76) [ClassicSimilarity], result of:
          0.081282035 = score(doc=76,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=76)
      0.14285715 = coord(1/7)
    
    Abstract
    The user-centered approach to information retrieval emphasizes the importance of a user model in determining what information will be most useful to a particular user, given their context. Mediated search provides an opportunity to elaborate on this idea, as an intermediary's elicitations reveal what aspects of the user model they think are worth inquiring about. However, empirical evidence is divided over whether intermediaries actually work to develop a broadly conceived user model. Our research revisits the issue in a web research services setting, whose characteristics are expected to result in more thorough user modeling on the part of intermediaries. Our empirical study confirms that intermediaries engage in rich user modeling. While intermediaries behave differently across settings, our interpretation is that the underlying user model characteristics that intermediaries inquire about in our setting are applicable to other settings as well.
  13. Umstätter, W.: Anwendung von Internet : eine Einführung (1995) 0.01
    0.009677563 = product of:
      0.06774294 = sum of:
        0.06774294 = product of:
          0.13548587 = sum of:
            0.13548587 = weight(_text_:anwendung in 1928) [ClassicSimilarity], result of:
              0.13548587 = score(doc=1928,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.74887794 = fieldWeight in 1928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1928)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  14. Lucas, W.; Topi, H.: Form and function : the impact of query term and operator usage on Web search results (2002) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 198) [ClassicSimilarity], result of:
          0.067735024 = score(doc=198,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=198)
      0.14285715 = coord(1/7)
    
    Abstract
    Conventional wisdom holds that queries to information retrieval systems will yield more relevant results if they contain multiple topic-related terms and use Boolean and phrase operators to enhance interpretation. Although studies have shown that the users of Web-based search engines typically enter short, term-based queries and rarely use search operators, little information exists concerning the effects of term and operator usage on the relevancy of search results. In this study, search engine users formulated queries on eight search topics. Each query was submitted to the user-specified search engine, and relevancy ratings for the retrieved pages were assigned. Expert-formulated queries were also submitted and provided a basis for comparing relevancy ratings across search engines. Data analysis based on our research model of the term and operator factors affecting relevancy was then conducted. The results show that the difference in the number of terms between expert and nonexpert searches, the percentage of matching terms between those searches, and the erroneous use of nonsupported operators in nonexpert searches explain most of the variation in the relevancy of search results. These findings highlight the need for designing search engine interfaces that provide greater support in the areas of term selection and operator usage
  15. dpa; Weizenbaum, J.: "Internet ist ein Schrotthaufen" (2005) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1560) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1560,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1560)
      0.14285715 = coord(1/7)
    
    Content
    "Das Internet ist nach Ansicht des bekannten US-Computerexperten und Philosophen Prof. Joseph Weizenbaum ein "Schrotthaufen" und verführt die Menschen zur Selbstüberschätzung. Weizenbaum, der in den 60er Jahren das Sprachanalyse-Programm "ELIZA" entwickelte, sprach im Rahmen einer Vortragsreihe im Computermuseum in Paderborn. "Das Ganze ist ein riesiger Misthaufen, der Perlen enthält. Aber um Perlen zu finden, muss man die richtigen Fragen stellen. Gerade das können die meisten Menschen nicht." Verlust von Kreativität Weizenbaum sagte weiter: "Wir haben die Illusion, dass wir in einer Informationsgesellschaft leben. Wir haben das Internet, wir haben die Suchmaschine Google, wir haben die Illusion, uns stehe das gesamte Wissen der Menschheit zur Verfügung." Kein Computer könne dem Menschen die eigentliche Information liefern. "Es ist die Arbeit der Interpretation im Kopf, die aus den Zeichen, die Computer anzeigen, eine Information macht." Der emeritierte Forscher des Massachusetts Institute of Technology kritisierte scharf das frühe Heranführen von Kindern an den Computer: "Computer für Kinder - das macht Apfelmus aus Gehirnen." Die Folge sei unter anderem, dass Studenten zum Teil bereits Programmen das Zusammenstellen der Hausarbeit überlasse. Menschen lernten in den Medien eine Hand voll Klischees, die auch in der Politik-Berichterstattung immer wieder auftauchten. Der Mangel an echter Aussage erkläre etwa den knappen Wahlausgang der USA, dessen 50:50-Proporz Ahnlichkeit mit Zufallsexperimenten habe."
  16. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 4279) [ClassicSimilarity], result of:
          0.067735024 = score(doc=4279,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.14285715 = coord(1/7)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  17. Madden, A.D.; Ford, N.J.; Miller, D.; Levy, P.: Children's use of the internet for information-seeking : what strategies do they use, and what factors affect their performance? (2006) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 615) [ClassicSimilarity], result of:
          0.067735024 = score(doc=615,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=615)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - A common criticism of research into information seeking on the internet is that information seekers are restricted by the demands of the researcher. Another criticism is that the search topics are often imposed by the researcher and that, particularly when working with children, domain knowledge could be as important as information-seeking skills. The research reported here attempts to address both these problems. Design/methodology/approach - A total of 15 children, aged 11 to 16, were each set three "think aloud" internet searches. In the first, they were asked to recall the last time they had sought information on the internet and to repeat the search. For the second, they were given a word, asked to interpret it, then asked to search for their interpretation. For the third, they were asked to recall the last time they had been unsuccessful in a search and to repeat the search. While performing each task, the children were encouraged to explain their actions. Findings - The paper finds that the factors determining a child's ability to search successfully appeared to be: the amount of experience the child had of using the internet; the amount of guidance, both from adults and from peers; and the child's ability to explore the virtual environment and to use the tools available for doing so. Originality/value - Many of the searches performed by participants in this paper were not related to schoolwork, and so some of the search approaches differed from those taught by teachers. Instead, they evolved through exploration and exchange of ideas. Further studies of this sort could provide insights of value to designers of web environments.
  18. Wijnhoven, F.; Brinkhuis, M.: Internet information triangulation : design theory and prototype evaluation (2015) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1724) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1724,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1724)
      0.14285715 = coord(1/7)
    
    Abstract
    Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory (consisting of a kernel theory, meta-requirements, and meta-designs) for software and services that triangulate Internet information. The kernel theory identifies 5 triangulation methods based on Churchman's inquiring systems theory and related meta-requirements. These meta-requirements are used to search for existing software and services that contain design features for Internet information triangulation tools. We discuss a prototyping study of the use of an information triangulator among 72 college students and how their use contributes to their opinion formation. From these findings, we conclude that triangulation tools can contribute to opinion formation by information consumers, especially when the tool is not a mere fact checker but includes the search and delivery of alternative views. Finally, we discuss other empirical propositions and design propositions for an agenda for triangulator developers and researchers. In particular, we propose investment in theory triangulation, that is, tools to automatically detect ethically and theoretically alternative information and views.
  19. Schmidt, M.: WWW - eine Erfindung des "alten Europa" : Vom Elektronengehirn zum World Wide Web - Inzwischen 620 Millionen Internetnutzer weltweit (2003) 0.01
    0.008423126 = product of:
      0.058961883 = sum of:
        0.058961883 = sum of:
          0.03871025 = weight(_text_:anwendung in 3372) [ClassicSimilarity], result of:
            0.03871025 = score(doc=3372,freq=2.0), product of:
              0.1809185 = queryWeight, product of:
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.037368443 = queryNorm
              0.21396513 = fieldWeight in 3372, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.03125 = fieldNorm(doc=3372)
          0.020251632 = weight(_text_:22 in 3372) [ClassicSimilarity], result of:
            0.020251632 = score(doc=3372,freq=2.0), product of:
              0.13085791 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037368443 = queryNorm
              0.15476047 = fieldWeight in 3372, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3372)
      0.14285715 = coord(1/7)
    
    Content
    "Das World Wide, Web hat, wen wundert es, eine Vorgeschichte. Und zwar, und da staunt der Laie denn doch, im Internet. World Wide Web, Internet - ist denn das nicht dasselbe? Nein. Ist es nicht. Das WWW ist eine Funktion des Internet. Eine von vielen. So wie Email und Chat. Die Geschichte ist die. In den 40er Jahren des 20. Jahrhunderts wurden die ersten EDV-Anlagen gebaut. In den 60er und 70er Jahren gehörten riesige Computer mit Lochkarten, Magnetbändern und Endlos-Ausdrucken zu den Prestige-Objekten von Unis, , Banken und Firmen, ehrfürchtig "Elektronengehir ne" oder ironisch "Blechtrottel" genannt. 1957 hatte das US-Verteidigungsministerium unter dem Eindruck des Sputnik-Schocks die Forschungsinstitution ARPA gegründet. Zwölf jahre später entstand das ARPAnet - ein Projekt zur Entwicklung eines Forschungsnetzes, das Universitäten und zivile wie militärische US-Einrichtungen verband. Dass die treibende Kraft das Bedürfnis gewesen sein soll, das Netz vor Bomben zu schützen, ist wohl ein Gerücht. Nach Larry Roberts, einem der "Väter" des Internet, kam dieses Argument erst später auf. Es erwies sich als nützlich für das Aquirieren von Forschungsgeldern... Die globale elektronische Kommunikation blieb nicht auf die Welt der Akademiker beschränkt. Das Big Business begann die Lunte zu riechen. Fast überall, wanderten die Handelsmärkte vom Parkett und den Wandtafeln auf die Computerbildschirme: Das Internet war mittlerweile zu einem brauchbaren Datenübermittlungsmedium geworden, hatte aber noch einen Nachteil: Man konnte Informationen nur finden, wenn man wusste, wo man suchen muss. In den Folgejahren kam es zu einer Explosion in der Entwicklung neuer Navigationsprotokolle, es entstand als bedeutendste Entwicklung das WWW -übrigens im "alten Europa", am europäischen Forschungszentrum für Teilchenphysik (CERN) in Genf. Erfunden hat es Tim Berners-Lee. Seine Erfindung war eine doppelte. Zunächst die Anwendung des schon lange bekannten Hypertextprinzipes (Ted Nelson, 1972) auf elektronische Dokumente - in der Hypertext Markup Language (HTML). Und dann eine einfache von Herrn und Frau Jedermann bedienbare grafische Oberfläche, die diese Dokumente, austauscht und zur Anzeige bringt (über das Hypertext Transport Protokoll - HTTP). Die allererste Software hieß "Mosaic" und wird heute Browser genannt. Im April 1993 gab das CERN die World-Wide-Web-Software für. die Öffentlichkeit frei, zur unbeschränkten und kostenlosen Nutzung. Heute umfasst das WWW über 32 Millionen registrierte Domain-Namen, davon 5 Millionen .deDomains, und der weltweite Zugang zum Internet erreichte Ende 2002 über 620 Millionen Nutzer."
    Date
    3. 5.1997 8:44:22
  20. Lutz, H.: Back to business : was CompuServe Unternehmen bietet (1997) 0.01
    0.008182895 = product of:
      0.057280265 = sum of:
        0.057280265 = product of:
          0.11456053 = sum of:
            0.11456053 = weight(_text_:22 in 6569) [ClassicSimilarity], result of:
              0.11456053 = score(doc=6569,freq=4.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.8754574 = fieldWeight in 6569, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6569)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 2.1997 19:50:29
    Source
    Cogito. 1997, H.1, S.22-23

Languages

  • d 215
  • e 188
  • f 8
  • sp 1

Types