Search (1771 results, page 2 of 89)

  • theme_ss:"Internet"
  1. Heckner, M.: Tagging, rating, posting : studying forms of user contribution for web-based information management and information retrieval (2009) 0.03
    0.029007396 = product of:
      0.14503698 = sum of:
        0.053233504 = weight(_text_:web in 2931) [ClassicSimilarity], result of:
          0.053233504 = score(doc=2931,freq=20.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5701118 = fieldWeight in 2931, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2931)
        0.091803476 = weight(_text_:log in 2931) [ClassicSimilarity], result of:
          0.091803476 = score(doc=2931,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5006735 = fieldWeight in 2931, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2931)
      0.2 = coord(2/10)
    
    Content
    The Web of User Contribution - Foundations and Principles of the Social Web - Social Tagging - Rating and Filtering of Digital Resources - Empirical Analysis of User Contributions - The Functional and Linguistic Structure of Tags - A Comparative Analysis of Tags for Different Digital Resource Types - Exploring Relevance Assessments in Social IR Systems - Exploring User Contribution Within a Higher Education Scenario - Summary of Empirical Results and Implications for Designing Social Information Systems - User Contribution for a Participative Information System - Social Information Architecture for an Online Help System
    Object
    Web 2.0
    RSWK
    World Wide Web 2.0 / Benutzer / Online-Publizieren / Information Retrieval / Soziale Software / Hilfesystem
    Social Tagging / Filter / Web log / World Wide Web 2.0
    Subject
    World Wide Web 2.0 / Benutzer / Online-Publizieren / Information Retrieval / Soziale Software / Hilfesystem
    Social Tagging / Filter / Web log / World Wide Web 2.0
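    The numeric breakdown attached to each hit is Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. As an editorial illustration only (the helper below is a sketch, not Lucene's actual API), the following Python snippet recomputes record 1's two term weights and its final score from the figures shown above; the coord factor 2/10 reflects that only two of the ten query clauses matched the document.

      import math

      def classic_similarity(freq, idf, query_norm, field_norm):
          # One term's contribution in Lucene's ClassicSimilarity:
          # weight = queryWeight * fieldWeight
          #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      # Figures taken from the breakdown of record 1 (doc 2931) above.
      w_web = classic_similarity(freq=20.0, idf=3.2635105,
                                 query_norm=0.028611459, field_norm=0.0390625)
      w_log = classic_similarity(freq=4.0, idf=6.4086204,
                                 query_norm=0.028611459, field_norm=0.0390625)

      # coord(2/10): only 2 of the 10 query clauses matched this document.
      score = (w_web + w_log) * (2 / 10)
      print(w_web, w_log, score)   # ~0.0532335, ~0.0918035, ~0.0290074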
  2. Hightower, C.; Sih, J.; Tilghman, A.: Recommendations for benchmarking Web site usage among academic libraries (1998) 0.03
    0.028715858 = product of:
      0.14357929 = sum of:
        0.05269848 = weight(_text_:web in 1478) [ClassicSimilarity], result of:
          0.05269848 = score(doc=1478,freq=10.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5643819 = fieldWeight in 1478, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1478)
        0.09088081 = weight(_text_:log in 1478) [ClassicSimilarity], result of:
          0.09088081 = score(doc=1478,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.49564147 = fieldWeight in 1478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1478)
      0.2 = coord(2/10)
    
    Abstract
    Argues that comparative statistical analysis of Web site usage among similar institutions would improve librarians' ability to evaluate the effectiveness of their efforts. Discusses the factors to consider in designing such a benchmarking programme, based on a pilot study of Web site usage statistics from the raw user log files of the Web servers at 14 science and technology libraries. Recommends the formation of a centralized voluntary reporting structure for Web server usage statistics, coordinated by the Association of Research Libraries' (ARL's) Office of Statistics, which would provide a significant service to academic librarians.
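    As an editorial aside (not taken from the article): the "raw user log files" mentioned above are typically Web server access logs in Common Log Format, and a basic usage statistic such as requests per day can be derived from them along the following lines. The file name and regular expression are illustrative assumptions.

      import re
      from collections import Counter

      # Matches Common Log Format lines, e.g.
      # 127.0.0.1 - - [10/Oct/1998:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
      CLF = re.compile(r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4}):[^\]]+\] "([^"]*)" (\d{3}) (\S+)')

      def daily_request_counts(path):
          counts = Counter()
          with open(path, encoding="utf-8", errors="replace") as log:
              for line in log:
                  m = CLF.match(line)
                  if m:
                      counts[m.group(2)] += 1   # key: dd/Mon/yyyy
          return counts

      # Hypothetical usage:
      # print(daily_request_counts("access.log").most_common(5))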
  3. Weltbibliothek Internet? (1997) 0.03
    0.028448636 = product of:
      0.14224318 = sum of:
        0.13181213 = weight(_text_:schutz in 1659) [ClassicSimilarity], result of:
          0.13181213 = score(doc=1659,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.63812417 = fieldWeight in 1659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0625 = fieldNorm(doc=1659)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 1659) [ClassicSimilarity], result of:
              0.031293165 = score(doc=1659,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 1659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1659)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Content
    Contains the contributions: LYNCH, C.: Strategien der Informationssuche; STIX, G.: Das Auffinden von Bildern; OUDET, B.: Globales Medium Englisch - welche Chancen haben andere Sprachen?; LESK, M.: Die digitale Bücherwelt; STEFIK, M.: Systeme zum Schutz des Urheberrechts
    Date
    31.12.1996 19:29:41
  4. Siever, C.M.: Multimodale Kommunikation im Social Web : Forschungsansätze und Analysen zu Text-Bild-Relationen (2015) 0.03
    0.028384332 = product of:
      0.14192165 = sum of:
        0.1181149 = weight(_text_:kommunikation in 4056) [ClassicSimilarity], result of:
          0.1181149 = score(doc=4056,freq=16.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.8031421 = fieldWeight in 4056, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.023806747 = weight(_text_:web in 4056) [ClassicSimilarity], result of:
          0.023806747 = score(doc=4056,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 4056, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
      0.2 = coord(2/10)
    
    Abstract
    Multimodality is a typical feature of communication in the social web. This volume focuses on communication in photo communities, in particular on the two communicative practices of social tagging and of writing notes within images. For tags, semantic text-image relations are in the foreground: tags serve knowledge representation, so an adequate verbalization of the images is indispensable. Note-image relations are of interest from a pragmatic perspective: the information of a message is distributed complementarily across text and image, which is reflected in various linguistic phenomena. A diachronic comparison with postcard communication and an excursus on communication with emojis round off the book.
    RSWK
    Social Media / Multimodalität / Kommunikation / Social Tagging (DNB)
    Text / Bild / Computerunterstützte Kommunikation / Soziale Software (SBB)
    Subject
    Social Media / Multimodalität / Kommunikation / Social Tagging (DNB)
    Text / Bild / Computerunterstützte Kommunikation / Soziale Software (SBB)
  5. Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Frieder, O.; Grossman, D.: Temporal analysis of a very large topically categorized Web query log (2007) 0.03
    0.028318608 = product of:
      0.14159304 = sum of:
        0.029157192 = weight(_text_:web in 60) [ClassicSimilarity], result of:
          0.029157192 = score(doc=60,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 60, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=60)
        0.11243584 = weight(_text_:log in 60) [ClassicSimilarity], result of:
          0.11243584 = score(doc=60,freq=6.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.61319727 = fieldWeight in 60, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=60)
      0.2 = coord(2/10)
    
    Abstract
    The authors review a log of billions of Web queries that constituted the total query traffic for a 6-month period of a general-purpose commercial Web search service. Previously, query logs were studied from a single, cumulative view. In contrast, this study builds on the authors' previous work, which showed changes in popularity and uniqueness of topically categorized queries across the hours in a day. To further their analysis, they examine query traffic on a daily, weekly, and monthly basis by matching it against lists of queries that have been topically precategorized by human editors. These lists represent 13% of the query traffic. They show that query traffic from particular topical categories differs both from the query stream as a whole and from other categories. Additionally, they show that certain categories of queries trend differently over varying periods. The authors' key contribution is twofold: They outline a method for studying both the static and topical properties of a very large query log over varying periods, and they identify and examine topical trends that may provide valuable insight for improving both retrieval effectiveness and efficiency.
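    A rough, purely illustrative sketch of the kind of analysis described above (the category lists and log entries are invented, not the authors' data): bucket logged queries by day and count how many fall into each topically precategorized list.

      from collections import Counter, defaultdict

      # Hypothetical editor-built lists of precategorized queries.
      category_lists = {
          "health": {"flu symptoms", "aspirin"},
          "sports": {"nba scores", "world cup"},
      }

      def category_traffic_by_day(log_entries):
          # log_entries: iterable of (date_str, query_str) pairs
          traffic = defaultdict(Counter)
          for day, query in log_entries:
              for category, queries in category_lists.items():
                  if query.lower() in queries:
                      traffic[day][category] += 1
          return traffic

      sample = [("2006-01-02", "flu symptoms"), ("2006-01-02", "nba scores"),
                ("2006-01-03", "flu symptoms")]
      print(dict(category_traffic_by_day(sample)))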
  6. Cooper, M.D.: Design considerations in instrumenting and monitoring Web-based information retrieval systems (1998) 0.03
    0.025889052 = product of:
      0.12944525 = sum of:
        0.037641775 = weight(_text_:web in 1793) [ClassicSimilarity], result of:
          0.037641775 = score(doc=1793,freq=10.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.40312994 = fieldWeight in 1793, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1793)
        0.091803476 = weight(_text_:log in 1793) [ClassicSimilarity], result of:
          0.091803476 = score(doc=1793,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5006735 = fieldWeight in 1793, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1793)
      0.2 = coord(2/10)
    
    Abstract
    The Internet Web environment opens up extraordinary opportunities for user access to information. Techniques for monitoring users and systems and for evaluating system design and performance have not kept pace with Web development. This article reviews concepts of Web operations (including browsers, clients, information retrieval applications, servers, and data communications systems) with specific attention given to how monitoring should take place and how privacy can be protected. It examines monitoring needs of users, systems designers, managers, and customer support staff and outlines measures for workload, capacity, and performance for hardware, software, and data communications systems. Finally, the article proposes a client-server design for monitoring, which involves creation of a series of server and client systems to obtain and process transaction and computer performance information. These systems include: a log server, which captures all levels of transactions and packets on the network; a monitor server, which synthesizes the log and packet data; an assistance server, which processes requests for information and help from the Web server in real time; and an accounting server, which authenticates user access to the system. A special system administrator client is proposed to control the monitoring system, as is a system information client to receive real-time and on-demand reports of system activity
  7. Klauser, H.: Freiheit oder totale Kontrolle : das Internet und die Grundrechte (2012) 0.02
    0.0248285 = product of:
      0.1241425 = sum of:
        0.04175992 = weight(_text_:kommunikation in 338) [ClassicSimilarity], result of:
          0.04175992 = score(doc=338,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.28395358 = fieldWeight in 338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=338)
        0.08238258 = weight(_text_:schutz in 338) [ClassicSimilarity], result of:
          0.08238258 = score(doc=338,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.3988276 = fieldWeight in 338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0390625 = fieldNorm(doc=338)
      0.2 = coord(2/10)
    
    Abstract
    In early November 2012 the Internet Governance Forum (IGF), which addresses the governance and development of the Internet at the global level, will take place for the seventh time. This year the "world summit of the Internet" will be held in Baku, Azerbaijan, bringing together representatives from politics, the private sector, international organizations, and civil society. As in previous years, the international library federation IFLA will again take part in order to bring the important role of libraries in the modern information society into the discussions. The Internet Governance Forum emerged from the two World Summits on the Information Society (WSIS), 2003 in Geneva and 2005 in Tunis, which for the first time discussed topics such as information and communication and the global information society; it was formally convened in 2006 by the Secretary-General of the United Nations without decision-making power of its own, and its task is to discuss a wide range of Internet topics such as copyright questions, bridging the digital divide, protection of privacy, and freedom of expression on the net. The theme of the Baku conference is "Internet Governance for Sustainable Human, Economic and Social Development". Various countries and regions of the world, including Europe and, among others, the USA, Denmark, Italy, Russia, Ukraine, Finland, Sweden, Spain, and Germany, have founded regional and national IGF initiatives in order to prepare the discussions of the annual meetings at the national or regional level. On 7 May 2012, around 80 German representatives from politics, civil society, associations, and industry came together in Berlin for the fourth German Internet Governance Forum to compile, under the theme "The relationship between the Internet and fundamental and human rights", the key points from a German perspective for the participation in Baku.
  8. Choi, B.; Peng, X.: Dynamic and hierarchical classification of Web pages (2004) 0.02
    0.024613593 = product of:
      0.12306796 = sum of:
        0.04517013 = weight(_text_:web in 2555) [ClassicSimilarity], result of:
          0.04517013 = score(doc=2555,freq=10.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.48375595 = fieldWeight in 2555, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2555)
        0.07789783 = weight(_text_:log in 2555) [ClassicSimilarity], result of:
          0.07789783 = score(doc=2555,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 2555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=2555)
      0.2 = coord(2/10)
    
    Abstract
    Automatic classification of Web pages is an effective way to organise the vast amount of information and to assist in retrieving relevant information from the Internet. Although many automatic classification systems have been proposed, most of them ignore the conflict between the fixed number of categories and the growing number of Web pages being added into the systems. They also require searching through all existing categories to make any classification. This article proposes a dynamic and hierarchical classification system that is capable of adding new categories as required, organising the Web pages into a tree structure, and classifying Web pages by searching through only one path of the tree. The proposed single-path search technique reduces the search complexity from O(n) to O(log n). Test results show that the system improves the accuracy of classification by 6 percent in comparison to related systems. The dynamic-category expansion technique also achieves satisfactory results for adding new categories into the system as required.
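    To make the complexity claim concrete, here is a generic, editor-supplied sketch of the single-path idea (a toy term-overlap score stands in for the authors' actual classifier): at each level of the category tree only the best-matching child is followed, so a balanced tree requires roughly O(log n) category comparisons instead of O(n).

      def classify_single_path(page_terms, node):
          # node: dict with "name", "terms" (set of str), and "children" (list of nodes)
          path = [node["name"]]
          while node["children"]:
              # follow the child whose term profile overlaps the page most (toy scoring)
              node = max(node["children"],
                         key=lambda child: len(page_terms & child["terms"]))
              path.append(node["name"])
          return path

      tree = {"name": "root", "terms": set(), "children": [
          {"name": "sports", "terms": {"game", "score"}, "children": []},
          {"name": "science", "terms": {"dna", "experiment"}, "children": []},
      ]}
      print(classify_single_path({"dna", "lab", "experiment"}, tree))  # ['root', 'science']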
  9. Huang, C.-K.; Chien, L.-F.; Oyang, Y.-J.: Relevant term suggestion in interactive Web search based on contextual information in query session logs (2003) 0.02
    0.024192134 = product of:
      0.12096067 = sum of:
        0.029157192 = weight(_text_:web in 1612) [ClassicSimilarity], result of:
          0.029157192 = score(doc=1612,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 1612, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1612)
        0.091803476 = weight(_text_:log in 1612) [ClassicSimilarity], result of:
          0.091803476 = score(doc=1612,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5006735 = fieldWeight in 1612, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1612)
      0.2 = coord(2/10)
    
    Abstract
    This paper proposes an effective term suggestion approach to interactive Web search. Conventional approaches to making term suggestions involve extracting co-occurring keyterms from highly ranked retrieved documents. Such approaches must deal with term extraction difficulties and interference from irrelevant documents, and, more importantly, have difficulty extracting terms that are conceptually related but do not frequently co-occur in documents. In this paper, we present a new, effective log-based approach to relevant term extraction and term suggestion. Using this approach, the relevant terms suggested for a user query are those that co-occur in similar query sessions from search engine logs, rather than in the retrieved documents. In addition, the suggested terms in each interactive search step can be organized according to their relevance to the entire query session, rather than to the most recent single query as in conventional approaches. The proposed approach was tested using a proxy server log containing about two million query transactions submitted to search engines in Taiwan. The obtained experimental results show that the proposed approach can provide organized and highly relevant terms, and can exploit the contextual information in a user's query session to make more effective suggestions.
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
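    A toy illustration of the session-based suggestion idea described in the abstract above (sessions and ranking here are invented for the example and do not reproduce the authors' implementation): terms are suggested because they co-occur with the user's query within the same logged query sessions, not because they co-occur in retrieved documents.

      from collections import Counter

      # Hypothetical proxy-log query sessions.
      sessions = [
          ["notebook", "laptop", "laptop battery"],
          ["laptop", "laptop price", "notebook"],
          ["digital camera", "memory card"],
      ]

      def suggest_terms(query, sessions, top_k=3):
          co_counts = Counter()
          for session in sessions:
              if query in session:
                  co_counts.update(q for q in session if q != query)
          return [term for term, _ in co_counts.most_common(top_k)]

      print(suggest_terms("laptop", sessions))  # e.g. ['notebook', 'laptop battery', 'laptop price']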
  10. Polleres, A.; Lausen, H.; Lara, R.: Semantische Beschreibung von Web Services (2006) 0.02
    0.024163514 = product of:
      0.12081757 = sum of:
        0.05846389 = weight(_text_:kommunikation in 5813) [ClassicSimilarity], result of:
          0.05846389 = score(doc=5813,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.39753503 = fieldWeight in 5813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5813)
        0.062353685 = weight(_text_:web in 5813) [ClassicSimilarity], result of:
          0.062353685 = score(doc=5813,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6677857 = fieldWeight in 5813, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5813)
      0.2 = coord(2/10)
    
    Abstract
    This chapter covers application areas and approaches for the semantic description of Web services. Existing Web service technologies make a decisive contribution to the development of distributed applications because widely accepted standards are in place that govern the communication between applications and enable their combination into more complex units. Automated mechanisms for discovering suitable Web services and for composing them, by contrast, are supported by existing technologies only to a comparatively small degree. Much as with the annotation of static data in the "Semantic Web", research and industry place great hopes in the semantic description of Web services as a way to largely automate these tasks.
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
  11. Sixth International World Wide Web Conference (1997) 0.02
    0.023888204 = product of:
      0.11944102 = sum of:
        0.057136193 = weight(_text_:web in 2053) [ClassicSimilarity], result of:
          0.057136193 = score(doc=2053,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6119082 = fieldWeight in 2053, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=2053)
        0.062304825 = product of:
          0.09345724 = sum of:
            0.046939746 = weight(_text_:29 in 2053) [ClassicSimilarity], result of:
              0.046939746 = score(doc=2053,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.46638384 = fieldWeight in 2053, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2053)
            0.04651749 = weight(_text_:22 in 2053) [ClassicSimilarity], result of:
              0.04651749 = score(doc=2053,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.46428138 = fieldWeight in 2053, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2053)
          0.6666667 = coord(2/3)
      0.2 = coord(2/10)
    
    Content
    Papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.865-1542
  12. Beck, K.: Zur Bildungsfunktion computervermittelter Kommunikation (2001) 0.02
    0.023145929 = product of:
      0.115729645 = sum of:
        0.10022382 = weight(_text_:kommunikation in 6589) [ClassicSimilarity], result of:
          0.10022382 = score(doc=6589,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.68148863 = fieldWeight in 6589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.09375 = fieldNorm(doc=6589)
        0.015505831 = product of:
          0.04651749 = sum of:
            0.04651749 = weight(_text_:22 in 6589) [ClassicSimilarity], result of:
              0.04651749 = score(doc=6589,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.46428138 = fieldWeight in 6589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6589)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Date
    3.10.2001 15:40:22
  13. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.02308332 = product of:
      0.115416594 = sum of:
        0.050501734 = weight(_text_:web in 4279) [ClassicSimilarity], result of:
          0.050501734 = score(doc=4279,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5408555 = fieldWeight in 4279, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
        0.06491486 = weight(_text_:log in 4279) [ClassicSimilarity], result of:
          0.06491486 = score(doc=4279,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.2 = coord(2/10)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  14. fwt: Geheime Zeichen der Vernetzung : Web-Erfinder Tim Berners-Lee plant eine neue 'Revolution' (2001) 0.02
    0.02301428 = product of:
      0.07671426 = sum of:
        0.033407938 = weight(_text_:kommunikation in 5928) [ClassicSimilarity], result of:
          0.033407938 = score(doc=5928,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.22716287 = fieldWeight in 5928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.03125 = fieldNorm(doc=5928)
        0.038090795 = weight(_text_:web in 5928) [ClassicSimilarity], result of:
          0.038090795 = score(doc=5928,freq=16.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4079388 = fieldWeight in 5928, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5928)
        0.0052155275 = product of:
          0.015646582 = sum of:
            0.015646582 = weight(_text_:29 in 5928) [ClassicSimilarity], result of:
              0.015646582 = score(doc=5928,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15546128 = fieldWeight in 5928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5928)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    The World Wide Web is to do more - that is the goal of WWW inventor Tim Berners-Lee. His strategy: equip the documents in the data network with more supplementary information that search engines can evaluate. Most web pages are written with HTML commands, the Hypertext Markup Language. The code contains the text that is to appear on the screen when the page is retrieved, plus further hidden information, including links to other texts, images, or films and statements about the content of the page. Thanks to these technical additions, search engines can, for example, automatically add homepages to their index. Documents on the World Wide Web are to carry far more hidden information than before, Berners-Lee explains in the British science journal Nature (vol. 410, p. 1023). A research paper could, for example, mark up the measurement results of a described experiment and the materials used in it in machine-readable form. The document would describe itself and its content in detail for other computers and ease communication from one machine to another. Intelligent search engines could quickly read and evaluate this universal supplementary information - and answer targeted questions. Users could then issue research requests such as: "Find all documents that describe the investigation of the genetic material DNA with the help of the substance calcium." The additional information that determines search success would not appear on the screen and confuse readers; like the links to other pages on the Web today, it would remain largely invisible in the background. "We are in the early days of a new Web revolution that will have profound effects on publishing on the Web and on the nature of the net," Berners-Lee says optimistically. He invented the World Wide Web - the graphical, mouse-operated part of the Internet - at the beginning of the 1990s at the European particle physics laboratory (Cern) in Geneva. Since then it has spread at a furious pace and has become the pacesetter for an entire industry. Today Berners-Lee is director of the World Wide Web Consortium (W3C), which develops standards for the WWW. Berners-Lee sketches possible effects of the semantic computer network on scientific research: scientists could publish their results outside a journal article. An authorized circle of colleagues would have access and could offer suggestions for further work without having to wait for the printed version of the paper in a journal. Is that science fiction? Berners-Lee disagrees. Who would have believed a decade ago that a computer-based network of texts would challenge the 200-year-old tradition of academic publishing?
    Source
    Frankfurter Rundschau. Nr.123 vom 29.5.2001, S.29
  15. Marchionini, G.: Co-evolution of user and organizational interfaces : a longitudinal case study of WWW dissemination of national statistics (2002) 0.02
    0.022889657 = product of:
      0.11444829 = sum of:
        0.023567477 = weight(_text_:web in 1252) [ClassicSimilarity], result of:
          0.023567477 = score(doc=1252,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 1252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1252)
        0.09088081 = weight(_text_:log in 1252) [ClassicSimilarity], result of:
          0.09088081 = score(doc=1252,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.49564147 = fieldWeight in 1252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1252)
      0.2 = coord(2/10)
    
    Abstract
    The data systems, policies and procedures, corporate culture, and public face of an agency or institution make up its organizational interface. This case study describes how user interfaces for the Bureau of Labor Statistics web site evolved over a 5-year period along with the larger organizational interface and how this co-evolution has influenced the institution itself. Interviews with BLS staff and transaction log analysis are the foci in this analysis, which also included user information-seeking studies and user interface prototyping and testing. The results are organized into a model of organizational interface change and related to the information life cycle.
  16. Goodman, J.; Heckerman, D.; Rounthwaite, R.: Schutzwälle gegen Spam (2005) 0.02
    0.021966483 = product of:
      0.07322161 = sum of:
        0.033407938 = weight(_text_:kommunikation in 3696) [ClassicSimilarity], result of:
          0.033407938 = score(doc=3696,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.22716287 = fieldWeight in 3696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.03125 = fieldNorm(doc=3696)
        0.019045398 = weight(_text_:web in 3696) [ClassicSimilarity], result of:
          0.019045398 = score(doc=3696,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.2039694 = fieldWeight in 3696, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3696)
        0.020768277 = product of:
          0.031152414 = sum of:
            0.015646582 = weight(_text_:29 in 3696) [ClassicSimilarity], result of:
              0.015646582 = score(doc=3696,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15546128 = fieldWeight in 3696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3696)
            0.015505832 = weight(_text_:22 in 3696) [ClassicSimilarity], result of:
              0.015505832 = score(doc=3696,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15476047 = fieldWeight in 3696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3696)
          0.6666667 = coord(2/3)
      0.3 = coord(3/10)
    
    Abstract
    Every day, billions of unsolicited advertising e-mails - spam - flood the Internet and burden worldwide communication. How can this nuisance be curbed?
    The first spam was sent in 1978 to 400 recipients on the Arpanet. The sender was an employee of the PR department of Digital Equipment Corporation (DEC), advertising the company's then-new Decsystem-20 computer. Today spam accounts for more than two thirds of the e-mails sent over the Internet; several billion such unsolicited advertising messages are sent every day. One third of all e-mail users have more than 80 percent spam in their electronic mail. For some time now, spam has also been causing trouble through so-called phishing attacks: forged e-mails are sent that appear to come from employees of large, trust-inspiring institutions but in fact come from fraudsters, in order to spy out credit card numbers or other personal information. According to a 2004 study by Gartner Research, phishing attacks cause damage of 1.2 billion dollars per year in the USA. Spammers do not use e-mail alone. In chat rooms, "robots" lie in wait that pose as humans and try to entice people to click on pornographic websites. Users of instant messaging systems (IM) have to deal with so-called spIMs, close "relatives" of spam. In web logs (web diaries), "link spammers" lurk who manipulate the work of Internet search engines by adding unwanted links, which makes using websites and links more difficult. Spam has something of a reputation for hindering Internet communication or even bringing it to a standstill. Reality, however, is not quite so bleak. Software developers have devised various techniques to filter out spam and to make life harder for spammers, and further ones are being tested in labs. The methods presented here target junk e-mail but could also be used to curb other spam variants. None of these techniques can work miracles, but their combination, provided as many users as possible apply them, promises at least clear improvements. It is not unrealistic to hope that our e-mail inboxes will one day be nearly free of spam again.
    Date
    31.12.1996 19:29:41
    18. 7.2005 11:07:22
  17. Hinich, M.J.; Molyneux, R.E.: Predicting information flows in network traffic (2003) 0.02
    0.021727478 = product of:
      0.10863739 = sum of:
        0.016833913 = weight(_text_:web in 5155) [ClassicSimilarity], result of:
          0.016833913 = score(doc=5155,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.18028519 = fieldWeight in 5155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5155)
        0.091803476 = weight(_text_:log in 5155) [ClassicSimilarity], result of:
          0.091803476 = score(doc=5155,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5006735 = fieldWeight in 5155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5155)
      0.2 = coord(2/10)
    
    Abstract
    Hinich and Molyneux review the literature of Internet measurement and note three results consistently found in network traffic studies. These are "self-similarity"; "long-range dependence," by which is meant that events at one time are correlated with events at a previous time and remain so over longer time periods than expected; and "heavy tails," by which they mean many small connections with low byte counts and a few long connections with large byte counts. The literature also suggests that conventional time series analysis is not helpful for network analysis. Using a single day's traffic at the Berkeley National Labs web server, cumulated TCP flows were collected and log-transformed after adding 0.01 to all values, which allows zero values to be transformed and yields a distribution that overcomes the heavy-tail problem. However, Hinich's bicorrelation test for nonlinearity using overlapping moving windows found strong evidence of nonlinear structures. Time series analysis assumes linear systems theory and thus additivity and scalability. Spectral analysis should produce large peaks at the lowest frequencies if long-range dependence is present, since the power spectrum would go to infinity as the frequency goes to zero. This does not occur, and so long-range dependence must be questioned, at least until it is determined what effect other OSI layers may have on the TCP data.
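    A minimal sketch of the transform mentioned above, assuming NumPy and hypothetical byte counts: adding 0.01 before taking logs lets the zero-valued flows be transformed as well.

      import numpy as np

      flow_bytes = np.array([0.0, 12.0, 40.0, 3_500_000.0])  # hypothetical cumulated TCP flows
      log_flows = np.log(flow_bytes + 0.01)                   # the offset handles the zeros
      print(log_flows)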
  18. Nicholas, D.: Assessing information needs : tools, techniques and concepts for the Internet age (2000) 0.02
    0.021293186 = product of:
      0.10646593 = sum of:
        0.028568096 = weight(_text_:web in 1745) [ClassicSimilarity], result of:
          0.028568096 = score(doc=1745,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3059541 = fieldWeight in 1745, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1745)
        0.07789783 = weight(_text_:log in 1745) [ClassicSimilarity], result of:
          0.07789783 = score(doc=1745,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 1745, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=1745)
      0.2 = coord(2/10)
    
    Abstract
    This work tackles one of the fundamental problems of information management - how to get the right information to the right person at the right time. It provides a practical framework to enable information services to gather information from users in order to aid information system design, and to monitor the effectiveness of an information service. This new edition has been fully revised and now has increased coverage of the Internet. The Web raises many problems when it comes to meeting information needs - authority and overload, for example - and these problems make an effective information needs analysis even more crucial. There is a new methodology section on Web log analysis and focus group interviews. Practical advice is given concerning interview technique and an interview schedule is included.
  19. Jansen, B.J.; Resnick, M.: ¬An examination of searcher's perceptions of nonsponsored and sponsored links during ecommerce Web searching (2006) 0.02
    0.02122987 = product of:
      0.10614935 = sum of:
        0.041234493 = weight(_text_:web in 221) [ClassicSimilarity], result of:
          0.041234493 = score(doc=221,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4416067 = fieldWeight in 221, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=221)
        0.06491486 = weight(_text_:log in 221) [ClassicSimilarity], result of:
          0.06491486 = score(doc=221,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=221)
      0.2 = coord(2/10)
    
    Abstract
    In this article, we report results of an investigation into the effect of sponsored links on ecommerce information seeking on the Web. In this research, 56 participants each engaged in six ecommerce Web searching tasks. We extracted these tasks from the transaction log of a Web search engine, so they represent actual ecommerce searching information needs. Using 60 organic and 30 sponsored Web links, the quality of the Web search engine results was controlled by switching nonsponsored and sponsored links on half of the tasks for each participant. This allowed for investigating the bias toward sponsored links while controlling for quality of content. The study also investigated the relationship between searching self-efficacy, searching experience, types of ecommerce information needs, and the order of links on the viewing of sponsored links. Data included 2,453 interactions with links from result pages and 961 utterances evaluating these links. The results of the study indicate that there is a strong preference for nonsponsored links, with searchers viewing these results first more than 82% of the time. Searching self-efficacy and experience do not increase the likelihood of viewing sponsored links, and the order of the result listing does not appear to affect searcher evaluation of sponsored links. The implications for sponsored links as a long-term business model are discussed.
  20. Krämer, S.: Kommunikation im Internet (1997) 0.02
    0.020984596 = product of:
      0.10492298 = sum of:
        0.09449192 = weight(_text_:kommunikation in 6286) [ClassicSimilarity], result of:
          0.09449192 = score(doc=6286,freq=4.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.64251363 = fieldWeight in 6286, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0625 = fieldNorm(doc=6286)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 6286) [ClassicSimilarity], result of:
              0.031293165 = score(doc=6286,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 6286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6286)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The project of artificial intelligence is losing its suggestive power. But the visionary gap this creates does not remain unfilled. A new utopia is emerging. It is not about the computer as a tool for thinking, but rather about the computer as a medium of communication. At issue is a combination of data processing and telecommunication that makes it possible to overcome the one-sidedness of communication at a distance. The promise is that the mutual reference and responsiveness we know from oral conversations between people who are present will become technically realizable even when the communicating parties are absent. 'Interactivity' thereby becomes a key category
    Date
    29. 1.1997 18:49:05

Types

  • a 1467
  • m 209
  • s 78
  • el 58
  • x 12
  • r 4
  • b 3
  • i 3
