Search (481 results, page 1 of 25)

  • theme_ss:"Internet"
  1. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 5860) [ClassicSimilarity], result of:
          0.09482904 = score(doc=5860,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 5860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5860)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
              0.035440356 = score(doc=5860,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 5860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5860)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
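    The indented block above is Lucene's ClassicSimilarity "explain" output for this hit: each matching query term contributes fieldWeight (tf x idf x fieldNorm) multiplied by queryWeight (idf x queryNorm), nested clauses are scaled by their own coord factor, and the clause scores are summed and scaled by the outer coord (here 2 of 7 query clauses matched). The explain blocks for the remaining hits decompose in exactly the same way. As a quick sanity check, the following sketch recomputes the displayed score from the leaf values shown above; the helper names are mine, not part of Lucene's API.

      import math

      QUERY_NORM = 0.037368443  # queryNorm taken from the explain tree

      def field_weight(tf, idf, field_norm):
          # fieldWeight = tf * idf * fieldNorm (ClassicSimilarity)
          return tf * idf * field_norm

      def query_weight(idf, query_norm):
          # queryWeight = idf * queryNorm
          return idf * query_norm

      # Leaf values for doc 5860, copied from the breakdown above.
      interpretation = field_weight(math.sqrt(2.0), 5.7281795, 0.0546875) \
          * query_weight(5.7281795, QUERY_NORM)
      # The "22" clause sits inside a nested boolean query, hence the extra coord(1/2).
      term_22 = field_weight(math.sqrt(2.0), 3.5018296, 0.0546875) \
          * query_weight(3.5018296, QUERY_NORM) * 0.5

      total = (interpretation + term_22) * (2.0 / 7.0)  # outer coord(2/7)
      print(round(total, 9))  # approx. 0.032156922, i.e. the 0.03 shown for this hit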
    
    Abstract
    Richer graph models permit authors to 'program' the browsing behaviour they want WWW readers to see by turning the hypertext into a hyperprogram with specific semantics. Multiple browsing streams can be started under the author's control and then kept in step through the synchronization mechanisms provided by the graph model. Adds a Semantic Web Graph Layer (SWGL) which allows dynamic interpretation of link and node structures according to graph models. Details the SWGL and its architecture, some sample protocol implementations, and the latest extensions to MHTML
    Date
    1. 8.1996 22:08:06
  2. Däßler, R.; Palm, H.: Virtuelle Informationsräume mit VRML : Informationen recherchieren und präsentieren in 3D (1997) 0.03
    0.027563075 = product of:
      0.09647076 = sum of:
        0.081282035 = weight(_text_:interpretation in 2280) [ClassicSimilarity], result of:
          0.081282035 = score(doc=2280,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 2280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=2280)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 2280) [ClassicSimilarity], result of:
              0.030377446 = score(doc=2280,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 2280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2280)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Searching for information is one of the most important activities when working with the Internet. So far this has been done mainly in text form, using search engines or subject catalogues. A new approach to information is spatial visualization, a technique that is now standard in the display and interpretation of scientific data. 3D visualization can also be used to search for and present textual information. It creates virtual information spaces that can be flown through, as with a flight simulator, in order to look for information. What such 3D user interfaces look like and how they can be built with VRML is the subject of this book
    Date
    17. 7.2002 16:32:22
  3. Kaeser, E.: ¬Das postfaktische Zeitalter (2016) 0.02
    0.019490037 = product of:
      0.068215124 = sum of:
        0.05747507 = weight(_text_:interpretation in 3080) [ClassicSimilarity], result of:
          0.05747507 = score(doc=3080,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.2685084 = fieldWeight in 3080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3080)
        0.01074005 = product of:
          0.0214801 = sum of:
            0.0214801 = weight(_text_:22 in 3080) [ClassicSimilarity], result of:
              0.0214801 = score(doc=3080,freq=4.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.16414827 = fieldWeight in 3080, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3080)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Content
    "Es gibt Daten, Informationen und Fakten. Wenn man mir eine Zahlenreihe vorsetzt, dann handelt es sich um Daten: unterscheidbare Einheiten, im Fachjargon: Items. Wenn man mir sagt, dass diese Items stündliche Temperaturangaben der Aare im Berner Marzilibad bedeuten, dann verfüge ich über Information - über interpretierte Daten. Wenn man mir sagt, dies seien die gemessenen Aaretemperaturen am 22. August 2016 im Marzili, dann ist das ein Faktum: empirisch geprüfte interpretierte Daten. Dieser Dreischritt - Unterscheiden, Interpretieren, Prüfen - bildet quasi das Bindemittel des Faktischen, «the matter of fact». Wir alle führen den Dreischritt ständig aus und gelangen so zu einem relativ verlässlichen Wissen und Urteilsvermögen betreffend die Dinge des Alltags. Aber wie schon die Kurzcharakterisierung durchblicken lässt, bilden Fakten nicht den Felsengrund der Realität. Sie sind kritikanfällig, sowohl von der Interpretation wie auch von der Prüfung her gesehen. Um bei unserem Beispiel zu bleiben: Es kann durchaus sein, dass man uns zwei unterschiedliche «faktische» Temperaturverläufe der Aare am 22. August 2016 vorsetzt.
    - Das Amen des postmodernen Denkens Was nun? Wir führen den Unterschied zum Beispiel auf Ablesefehler (also auf falsche Interpretation) zurück oder aber auf verschiedene Messmethoden. Sofort ist ein Deutungsspielraum offen. Nietzsches berühmtes Wort hallt wider, dass es nur Interpretationen, keine Fakten gebe. Oder wie es im Englischen heisst: «Facts are factitious» - Fakten sind Artefakte, sie sind künstlich. Diese Ansicht ist quasi das Amen des postmodernen Denkens. Und als besonders tückisch an ihr entpuppt sich ihre Halbwahrheit. Es stimmt, dass Fakten oft das Ergebnis eines langwierigen Erkenntnisprozesses sind, vor allem heute, wo wir es immer mehr mit Aussagen über komplexe Systeme wie Migrationsdynamik, Meteorologie oder Märkte zu tun bekommen. Der Interpretationsdissens unter Experten ist ja schon fast sprichwörtlich.
  4. Moll, S.: ¬Der Urknall des Internets : 20 Jahre WWW (2011) 0.02
    0.01785056 = product of:
      0.12495391 = sum of:
        0.12495391 = weight(_text_:quantenphysik in 3720) [ClassicSimilarity], result of:
          0.12495391 = score(doc=3720,freq=2.0), product of:
            0.34748885 = queryWeight, product of:
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.037368443 = queryNorm
            0.35959113 = fieldWeight in 3720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3720)
      0.14285715 = coord(1/7)
    
    Content
    "Alle großen Erfindungen der Menschheitsgeschichte haben einen Entstehungsmythos. Einsteins Trambahnfahrt durch Zürich beispielsweise oder der berühmte Apfel, der Newton angeblich auf den Kopf gefallen ist. Als Tim Berners-Lee, damals Physikstudent in Manchester, Mitte der 70er Jahre mit seinem Vater in einem Stadtpark unter einem Baum saß, unterhielten sich die beiden darüber, dass sie doch in ihrem Garten auch einen solchen Baum gebrauchen könnten. Der Vater, ein Mathematiker, der an einem der ersten kommerziell genutzten Computer der Welt arbeitete, bemerkte, dass die Fähigkeit, die abstrakte Idee eines schattigen Baumes auf einen anderen Ort zu übertragen, doch eine einmalig menschliche sei. Computer könnten so etwas nicht. Das Problem ließ Berners-Lee nicht los. Deshalb suchte er, während er in den 80er Jahren als Berater am europäischen Labor für Quantenphysik (CERN) in der Schweiz arbeitete, noch immer nach einem Weg, seinem Computer beizubringen, Verbindungen zwischen den disparaten Dokumenten und Notizen auf seiner Festplatte herzustellen. Er entwarf deshalb ein System, das heute so alltäglich ist, wie Kleingeld. Lee stellte eine direkte Verknüpfung her zwischen Wörtern und Begriffen in Dokumenten und den gleichen Begriffen in anderen Dokumenten: Der Link war geboren.
  5. Access to electronic information, services and networks : an interpretation of the LIBRARY BILL OF RIGHTS (1995) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 4713) [ClassicSimilarity], result of:
          0.11495014 = score(doc=4713,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 4713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4713)
      0.14285715 = coord(1/7)
    
    Abstract
    At the 1996 Midwinter Meeting of the 57,000-member ALA in San Antonio, the ALA affirmed user rights in cyberspace and called on the US Congress to protect public access to information during the shift from print to electronic publishing. The latest ALA News over the net reported what Betty J. Turock, president of the ALA, said: 'Free access to information is essential to a democracy. Our concern as professional librarians is that new technology not become a barrier for members of the public.' The new 'Access to Electronic Information, Services and Networks: an interpretation of the Library Bill of Rights' was adopted by the ALA Council at the Midwinter Meeting, and will have profound implications and use for many libraries and librarians in the months to come. Because of its significance and potential impact, the text of this document has been downloaded from the ALA's Web site at http://www.ala.org to facilitate its use by readers of this journal
  6. Hochheiser, H.; Shneiderman, B.: Using interactive visualizations of WWW log data to characterize access patterns and inform site design (2001) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5765) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5765,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5765, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5765)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. Although useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive visualizations can be used to provide users with greater abilities to interpret and explore Web log data. By combining two-dimensional displays of thousands of individual access requests, color, and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional Web log analysis tools. We introduce a series of interactive visualizations that can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
  7. Hochheiser, H.; Shneiderman, B.: Understanding patterns of user visits to Web sites : Interactive Starfield visualizations of WWW log data (1999) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 6713) [ClassicSimilarity], result of:
          0.11495014 = score(doc=6713,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 6713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=6713)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. While useful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive starfield visualizations can be used to provide users with greater abilities to interpret and explore web log data. By combining two-dimensional displays of thousands of individual access requests, color and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional web log analysis tools. We introduce a series of interactive starfield visualizations, which can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored
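    The starfield idea described in this and the previous abstract maps each individual HTTP request to one point in a two-dimensional display, with further attributes encoded in colour and size. The sketch below is not the authors' tool; it is a minimal illustration of that mapping, assuming Apache common log format input and matplotlib for plotting, with two placeholder log lines standing in for real server data.

      import re
      from datetime import datetime
      import matplotlib.pyplot as plt

      # One point per request: time on x, requested path on y, colour = status class.
      LOG_LINE = re.compile(
          r'\S+ \S+ \S+ \[(?P<ts>[^\]]+)\] "\S+ (?P<path>\S+) \S+" (?P<status>\d{3}) \S+')

      def parse(lines):
          for line in lines:
              m = LOG_LINE.match(line)
              if m:
                  ts = datetime.strptime(m['ts'], '%d/%b/%Y:%H:%M:%S %z')
                  yield ts, m['path'], int(m['status'])

      sample = [  # placeholder log lines in Apache common log format
          '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
          '127.0.0.1 - - [10/Oct/2000:13:56:01 -0700] "GET /missing.html HTTP/1.0" 404 209',
      ]

      records = list(parse(sample))
      paths = sorted({p for _, p, _ in records})
      x = [t for t, _, _ in records]
      y = [paths.index(p) for _, p, _ in records]
      colours = ['tab:green' if s < 400 else 'tab:red' for _, _, s in records]

      plt.scatter(x, y, c=colours, s=20)
      plt.yticks(range(len(paths)), paths)
      plt.xlabel('time of request')
      plt.ylabel('requested resource')
      plt.title('starfield-style view of access log entries')
      plt.show()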
  8. Thelwall, M.; Vann, K.; Fairclough, R.: Web issue analysis : an integrated water resource management case study (2006) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5906) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5906,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5906, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5906)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article Web issue analysis is introduced as a new technique to investigate an issue as reflected on the Web. The issue chosen, integrated water resource management (IWRM), is a United Nations-initiated paradigm for managing water resources in an international context, particularly in developing nations. As with many international governmental initiatives, there is a considerable body of online information about it: 41,381 hypertext markup language (HTML) pages and 28,735 PDF documents mentioning the issue were downloaded. A page uniform resource locator (URL) and link analysis revealed the international and sectoral spread of IWRM. A noun and noun phrase occurrence analysis was used to identify the issues most commonly discussed, revealing some unexpected topics such as private sector and economic growth. Although the complexity of the methods required to produce meaningful statistics from the data is disadvantageous to easy interpretation, it was still possible to produce data that could be subject to a reasonably intuitive interpretation. Hence Web issue analysis is claimed to be a useful new technique for information science.
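    As a rough illustration of the noun and noun phrase occurrence step, the fragment below counts frequent word bigrams across a small set of placeholder pages after stop-word removal. This is my own simplification, not the authors' method, which used proper noun-phrase extraction over the roughly 41,381 HTML and 28,735 PDF documents mentioned above.

      import re
      from collections import Counter

      STOP = {"the", "a", "an", "of", "and", "or", "in", "on", "for", "to", "is", "are", "with"}

      def candidate_phrases(text):
          # Adjacent word pairs, after lowercasing and stop-word removal,
          # serve as crude stand-ins for noun phrases.
          words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
          return zip(words, words[1:])

      pages = [  # placeholder documents
          "Integrated water resource management supports economic growth.",
          "The private sector invests in integrated water resource management.",
      ]

      counts = Counter(p for page in pages for p in candidate_phrases(page))
      for (w1, w2), n in counts.most_common(5):
          print(f"{w1} {w2}: {n}")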
  9. Ma, Y.: Internet: the global flow of information (1995) 0.02
    0.0154822925 = product of:
      0.10837604 = sum of:
        0.10837604 = weight(_text_:interpretation in 4712) [ClassicSimilarity], result of:
          0.10837604 = score(doc=4712,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 4712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=4712)
      0.14285715 = coord(1/7)
    
    Abstract
    Colours, icons, graphics, hypertext links and other multimedia elements are variables that affect information search strategies and information seeking behaviour. These variables are culturally constructed and represented and are subject to individual and community interpretation. Hypothesizes that users in different communities (in intercultural or multicultural context) will interpret differently the meanings of the multimedia objects on the Internet. Users' interpretations of multimedia objects may differ from the intentions of the designers. A study in this area is being undertaken
  10. Court, J.; Lovis, G.; Fassbind-Eigenheer, R.: De la tradition orale aux reseaux de communication : la tradition orale (1998) 0.01
    0.013547006 = product of:
      0.09482904 = sum of:
        0.09482904 = weight(_text_:interpretation in 3994) [ClassicSimilarity], result of:
          0.09482904 = score(doc=3994,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 3994, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3994)
      0.14285715 = coord(1/7)
    
    Abstract
    Summarises a selection of the presentations and workshops under one of the main themes at the Association of Swiss Libraries and Librarians congress held in Yverdon, Sept 1998. Sessions covered comprise: a workshop on stories in libraries (history of the tradition in French libraries and criteria for selecting material); oral and written traditions (presentation on the continuing existence of various schools of interpretation, e.g. mythological, anthropological, in relation to the importance of individual contact); and listening - reading - writing (presentation on links between these 3 forms of communication in the context of the challenge for libraries in the field of children's education)
  11. Thelwall, M.: ¬A comparison of sources of links for academic Web impact factor calculations (2002) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 4474) [ClassicSimilarity], result of:
          0.081282035 = score(doc=4474,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 4474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4474)
      0.14285715 = coord(1/7)
    
    Abstract
    There has been much recent interest in extracting information from collections of Web links. One tool that has been used is Ingwersen's Web impact factor. It has been demonstrated that several versions of this metric can produce results that correlate with research ratings of British universities showing that, despite being a measure of a purely Internet phenomenon, the results are susceptible to a wider interpretation. This paper addresses the question of which is the best possible domain to count backlinks from, if research is the focus of interest. WIFs for British universities calculated from several different source domains are compared, primarily the .edu, .ac.uk and .uk domains, and the entire Web. The results show that all four areas produce WIFs that correlate strongly with research ratings, but that none produce incontestably superior figures. It was also found that the WIF was less able to differentiate in more homogeneous subsets of universities, although positive results are still possible.
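    The Web impact factor itself is not defined in this abstract; in the form usually attributed to Ingwersen it is the number of pages (within some source domain) that link to a site, divided by the number of pages indexed at that site. The sketch below computes it under that assumption, with purely hypothetical counts standing in for the search engine query results used in studies of this kind.

      def web_impact_factor(inlinking_pages: int, site_pages: int) -> float:
          # WIF = pages linking to the site / pages at the site (assumed form)
          if site_pages == 0:
              raise ValueError("site has no indexed pages")
          return inlinking_pages / site_pages

      # Hypothetical backlink counts for one university site, by source domain.
      counts = {".edu": 1200, ".ac.uk": 3400, ".uk": 5100, "whole Web": 9800}
      site_pages = 2500  # hypothetical number of indexed pages at the target site

      for source, inlinks in counts.items():
          print(f"WIF from {source}: {web_impact_factor(inlinks, site_pages):.2f}")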
  12. Oppenheim, C.; Selby, K.: Access to information on the World Wide Web for blind and visually impaired people (1999) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 727) [ClassicSimilarity], result of:
          0.081282035 = score(doc=727,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=727)
      0.14285715 = coord(1/7)
    
    Abstract
    The Internet gives access for blind and visually impaired users to previously unobtainable information via Braille or speech synthesis interpretation. This paper looks at how three search engines, AltaVista, Yahoo! and Infoseek, presented their information to a small group of visually impaired and blind users and how accessible individual Internet pages are. Two participants had varying levels of partial sight and two subjects were blind and solely reliant on speech synthesis output. Subjects were asked for feedback on interface design at various stages of their search and any problems they encountered were noted. The barriers to access that were found appear to arise from a lack of knowledge and thought on the part of the page designers themselves. An accessible page does not have to be dull. By adhering to simple guidelines, visually impaired users would be able to access information more effectively than would otherwise be possible. Visually disabled people would also have the same opportunity to access knowledge as their sighted colleagues.
  13. Bodoff, D.; Raban, D.: User models as revealed in web-based research services (2012) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 76) [ClassicSimilarity], result of:
          0.081282035 = score(doc=76,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=76)
      0.14285715 = coord(1/7)
    
    Abstract
    The user-centered approach to information retrieval emphasizes the importance of a user model in determining what information will be most useful to a particular user, given their context. Mediated search provides an opportunity to elaborate on this idea, as an intermediary's elicitations reveal what aspects of the user model they think are worth inquiring about. However, empirical evidence is divided over whether intermediaries actually work to develop a broadly conceived user model. Our research revisits the issue in a web research services setting, whose characteristics are expected to result in more thorough user modeling on the part of intermediaries. Our empirical study confirms that intermediaries engage in rich user modeling. While intermediaries behave differently across settings, our interpretation is that the underlying user model characteristics that intermediaries inquire about in our setting are applicable to other settings as well.
  14. Schweibenz, W.; Thissen, F.: Qualität im Web : Benutzerfreundliche Webseiten durch Usability Evaluation (2003) 0.01
    0.010528908 = product of:
      0.07370235 = sum of:
        0.07370235 = sum of:
          0.048387814 = weight(_text_:anwendung in 767) [ClassicSimilarity], result of:
            0.048387814 = score(doc=767,freq=2.0), product of:
              0.1809185 = queryWeight, product of:
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.037368443 = queryNorm
              0.2674564 = fieldWeight in 767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.0390625 = fieldNorm(doc=767)
          0.02531454 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
            0.02531454 = score(doc=767,freq=2.0), product of:
              0.13085791 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037368443 = queryNorm
              0.19345059 = fieldWeight in 767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=767)
      0.14285715 = coord(1/7)
    
    Abstract
    For websites, as for all interactive applications from simple vending machines to complex software, user-friendliness is of central importance. Yet sensible use of information offerings on the World Wide Web is often made unnecessarily difficult by "cool design", because central aspects of user-friendliness (usability) are neglected. Usability evaluation can improve the user-friendliness of websites and thus also their acceptance by users. The goal is to design appealing, user-friendly web offerings that allow users an effective and efficient dialogue. The book offers a practice-oriented introduction to web usability evaluation and describes the application of its various methods.
    Date
    22. 3.2008 14:24:08
  15. Rüping, U.: Anwendung und Nutzung von Internet in ausgewählten Hochschulbibliotheken der Bundesrepublik Deutschland (1994) 0.01
    0.009677563 = product of:
      0.06774294 = sum of:
        0.06774294 = product of:
          0.13548587 = sum of:
            0.13548587 = weight(_text_:anwendung in 7471) [ClassicSimilarity], result of:
              0.13548587 = score(doc=7471,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.74887794 = fieldWeight in 7471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7471)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  16. Umstätter, W.: Anwendung von Internet : eine Einführung (1995) 0.01
    0.009677563 = product of:
      0.06774294 = sum of:
        0.06774294 = product of:
          0.13548587 = sum of:
            0.13548587 = weight(_text_:anwendung in 1928) [ClassicSimilarity], result of:
              0.13548587 = score(doc=1928,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.74887794 = fieldWeight in 1928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1928)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  17. Lucas, W.; Topi, H.: Form and function : the impact of query term and operator usage on Web search results (2002) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 198) [ClassicSimilarity], result of:
          0.067735024 = score(doc=198,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=198)
      0.14285715 = coord(1/7)
    
    Abstract
    Conventional wisdom holds that queries to information retrieval systems will yield more relevant results if they contain multiple topic-related terms and use Boolean and phrase operators to enhance interpretation. Although studies have shown that the users of Web-based search engines typically enter short, term-based queries and rarely use search operators, little information exists concerning the effects of term and operator usage on the relevancy of search results. In this study, search engine users formulated queries on eight search topics. Each query was submitted to the user-specified search engine, and relevancy ratings for the retrieved pages were assigned. Expert-formulated queries were also submitted and provided a basis for comparing relevancy ratings across search engines. Data analysis based on our research model of the term and operator factors affecting relevancy was then conducted. The results show that the difference in the number of terms between expert and nonexpert searches, the percentage of matching terms between those searches, and the erroneous use of nonsupported operators in nonexpert searches explain most of the variation in the relevancy of search results. These findings highlight the need for designing search engine interfaces that provide greater support in the areas of term selection and operator usage
  18. dpa; Weizenbaum, J.: "Internet ist ein Schrotthaufen" (2005) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1560) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1560,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1560)
      0.14285715 = coord(1/7)
    
    Content
    "Das Internet ist nach Ansicht des bekannten US-Computerexperten und Philosophen Prof. Joseph Weizenbaum ein "Schrotthaufen" und verführt die Menschen zur Selbstüberschätzung. Weizenbaum, der in den 60er Jahren das Sprachanalyse-Programm "ELIZA" entwickelte, sprach im Rahmen einer Vortragsreihe im Computermuseum in Paderborn. "Das Ganze ist ein riesiger Misthaufen, der Perlen enthält. Aber um Perlen zu finden, muss man die richtigen Fragen stellen. Gerade das können die meisten Menschen nicht." Verlust von Kreativität Weizenbaum sagte weiter: "Wir haben die Illusion, dass wir in einer Informationsgesellschaft leben. Wir haben das Internet, wir haben die Suchmaschine Google, wir haben die Illusion, uns stehe das gesamte Wissen der Menschheit zur Verfügung." Kein Computer könne dem Menschen die eigentliche Information liefern. "Es ist die Arbeit der Interpretation im Kopf, die aus den Zeichen, die Computer anzeigen, eine Information macht." Der emeritierte Forscher des Massachusetts Institute of Technology kritisierte scharf das frühe Heranführen von Kindern an den Computer: "Computer für Kinder - das macht Apfelmus aus Gehirnen." Die Folge sei unter anderem, dass Studenten zum Teil bereits Programmen das Zusammenstellen der Hausarbeit überlasse. Menschen lernten in den Medien eine Hand voll Klischees, die auch in der Politik-Berichterstattung immer wieder auftauchten. Der Mangel an echter Aussage erkläre etwa den knappen Wahlausgang der USA, dessen 50:50-Proporz Ahnlichkeit mit Zufallsexperimenten habe."
  19. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 4279) [ClassicSimilarity], result of:
          0.067735024 = score(doc=4279,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.14285715 = coord(1/7)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  20. Madden, A.D.; Ford, N.J.; Miller, D.; Levy, P.: Children's use of the internet for information-seeking : what strategies do they use, and what factors affect their performance? (2006) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 615) [ClassicSimilarity], result of:
          0.067735024 = score(doc=615,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=615)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - A common criticism of research into information seeking on the internet is that information seekers are restricted by the demands of the researcher. Another criticism is that the search topics are often imposed by the researcher and, particularly when working with children, domain knowledge could be as important as information-seeking skills. The research reported here attempts to address both these problems. Design/methodology/approach - A total of 15 children, aged 11 to 16, were each set three "think aloud" internet searches. In the first, they were asked to recall the last time they had sought information on the internet, and to repeat the search. For the second, they were given a word, asked to interpret it, then asked to search for their interpretation. For the third, they were asked to recall the last time they had been unsuccessful in a search, and to repeat the search. While performing each task, the children were encouraged to explain their actions. Findings - The paper finds that the factors that determined a child's ability to search successfully appeared to be: the amount of experience the child had of using the internet; the amount of guidance, both from adults and from peers; and the child's ability to explore the virtual environment, and to use the tools available for so doing. Originality/value - Many of the searches performed by participants in this paper were not related to schoolwork, and so some of the search approaches differed from those taught by teachers. Instead, they evolved through exploration and exchange of ideas. Further studies of this sort could provide insights of value to designers of web environments.

Languages

  • d 252
  • e 218
  • f 8
  • el 1
  • sp 1

Types

  • a 412
  • m 43
  • s 19
  • el 12
  • x 5
  • r 3
  • b 1
