Search (112 results, page 1 of 6)

  • × year_i:[2010 TO 2020}
  • × theme_ss:"Internet"
  1. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.07
    0.07319553 = product of:
      0.14639106 = sum of:
        0.14639106 = sum of:
          0.10403923 = weight(_text_:web in 2158) [ClassicSimilarity], result of:
            0.10403923 = score(doc=2158,freq=16.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.6119082 = fieldWeight in 2158, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
          0.042351827 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
            0.042351827 = score(doc=2158,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.23214069 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
      0.5 = coord(1/2)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
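The score breakdown shown for the first hit follows Lucene's ClassicSimilarity. As a minimal sketch (statistics copied from the explanation above), each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """Per-term ClassicSimilarity contribution:
    (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# idf as reported: idf(docFreq=4597, maxDocs=44218) = 1 + ln(maxDocs / (docFreq + 1))
idf_web = 1.0 + math.log(44218 / (4597 + 1))

# Statistics from the "web" clause of hit 1 (doc 2158):
web_score = term_score(freq=16.0, idf=idf_web,
                       query_norm=0.052098576, field_norm=0.046875)
# web_score reproduces the 0.10403923 shown above; the hit's total then
# sums the per-term scores and applies the coord(1/2) factor.
```

The same routine reproduces the "22" clause of the explanation when fed its freq, idf, and norms.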
  2. Social Media und Web Science : das Web als Lebensraum, Düsseldorf, 22. - 23. März 2012, Proceedings, hrsg. von Marlies Ockenfeld, Isabella Peters und Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.07
    RSWK
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
    Subject
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
  3. Firnkes, M.: Schöne neue Welt : der Content der Zukunft wird von Algorithmen bestimmt (2015) 0.05
    Abstract
    While not so long ago the Internet was mainly just another information medium, its technical possibilities are currently exploding. They do more than strengthen exchange among users: in a great variety of ways, they all measure our daily habits. The mechanisms that make up the "bought web" become more complex as a result. Most new technologies and applications conceal ways of perfecting the seduction of consumers. Quite a few of them are also likely to matter to politics and other interest groups, as an alternative channel for mobilizing voters and supporters. The following chapter names the most important trends of the coming years, together with their possible manipulative effects. Only by observing who uses these future technologies, and how, can we guard against commercial excesses.
    Content
    With reference to the book: Firnkes, M.: Das gekaufte Web: wie wir online manipuliert werden. Hannover: Heise Zeitschriften Verlag 2015. 220 p.
    Date
    5. 7.2015 22:02:31
    Theme
    Semantic Web
  4. Joint, N.: Web 2.0 and the library : a transformational technology? (2010) 0.05
    Abstract
    Purpose - This paper is the final one in a series which has tried to give an overview of so-called transformational areas of digital library technology. The aim has been to assess how much real transformation these applications can bring about, in terms of creating genuine user benefit and also changing everyday library practice. Design/methodology/approach - The paper provides a summary of some of the legal and ethical issues associated with web 2.0 applications in libraries, associated with a brief retrospective view of some relevant literature. Findings - Although web 2.0 innovations have had a massive impact on the larger World Wide Web, the practical impact on library service delivery has been limited to date. What probably can be termed transformational in the effect of web 2.0 developments on library and information work is their effect on some underlying principles of professional practice. Research limitations/implications - The legal and ethical challenges of incorporating web 2.0 platforms into mainstream institutional service delivery need to be subject to further research, so that the risks associated with these innovations are better understood at the strategic and policy-making level. Practical implications - This paper makes some recommendations about new principles of library and information practice which will help practitioners make better sense of these innovations in their overall information environment. Social implications - The paper puts in context some of the more problematic social impacts of web 2.0 innovations, without denying the undeniable positive contribution of social networking to the sphere of human interactivity. Originality/value - This paper raises some cautionary points about web 2.0 applications without adopting a precautionary approach of total prohibition. However, none of the suggestions or analysis in this piece should be considered to constitute legal advice. If such advice is required, the reader should consult appropriate legal professionals.
    Date
    22. 1.2011 17:54:04
  5. Oguz, F.; Koehler, W.: URL decay at year 20 : a research note (2016) 0.05
    Abstract
    All text is ephemeral. Some texts are more ephemeral than others. The web has proved to be among the most ephemeral and changing of information vehicles. The research note revisits Koehler's original data set after about 20 years since it was first collected. By late 2013, the number of URLs responding to a query had fallen to 1.6% of the original sample. A query of the 6 remaining URLs in February 2015 showed only 2 still responding.
    Date
    22. 1.2016 14:37:14
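The decay arithmetic in the research note can be sketched as follows; the sample size below is an assumed placeholder, since the note reports only the 1.6% figure and the counts 6 and 2:

```python
def surviving_fraction(responding, original_sample):
    """Share of the originally collected URLs that still respond."""
    return responding / original_sample

ORIGINAL_SAMPLE = 375          # assumed for illustration, not from the note
responding_2013 = round(0.016 * ORIGINAL_SAMPLE)   # 1.6% of the sample -> 6 URLs
responding_2015 = 2            # of the 6 remaining URLs, per the note

frac_2013 = surviving_fraction(responding_2013, ORIGINAL_SAMPLE)   # back to 0.016
```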
  6. Dalip, D.H.; Gonçalves, M.A.; Cristo, M.; Calado, P.: ¬A general multiview framework for assessing the quality of collaboratively created content on web 2.0 (2017) 0.04
    Date
    16.11.2017 13:04:22
    Object
    Web 2.0
  7. Stuart, D.: Web metrics for library and information professionals (2014) 0.03
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
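Among the evaluative web metrics the book surveys are simple link counts. A minimal sketch (toy link graph, illustrative only, not from the book) of deriving inlink counts from recorded outlinks:

```python
from collections import Counter

def inlink_counts(outlinks):
    """Count inlinks per page, given a mapping page -> set of pages it links to."""
    counts = Counter()
    for page, targets in outlinks.items():
        for target in targets:
            counts[target] += 1
    return counts

web = {          # hypothetical three-page link graph
    "a": {"b", "c"},
    "b": {"c"},
    "c": {"a"},
}
counts = inlink_counts(web)   # "c" receives two inlinks, "a" and "b" one each
```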
  8. Bünte, O.: Bundesdatenschutzbeauftragte bezweifelt Facebooks Datenschutzversprechen (2018) 0.03
    Date
    23. 3.2018 13:41:22
    Footnote
    Vgl. zum Hintergrund auch: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election; https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html; http://www.latimes.com/business/la-fi-tn-facebook-cambridge-analytica-sued-20180321-story.html; https://www.tagesschau.de/wirtschaft/facebook-cambridge-analytica-103.html; http://www.spiegel.de/netzwelt/web/cambridge-analytica-der-eigentliche-skandal-liegt-im-system-facebook-kolumne-a-1199122.html; http://www.spiegel.de/netzwelt/netzpolitik/cambridge-analytica-facebook-sieht-sich-im-datenskandal-als-opfer-a-1199095.html; https://www.heise.de/newsticker/meldung/Datenskandal-um-Cambridge-Analytica-Facebook-sieht-sich-als-Opfer-3999922.html.
  9. Yang, S.; Han, R.; Ding, J.; Song, Y.: ¬The distribution of Web citations (2012) 0.03
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
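The Lotka function mentioned in the abstract is a power law of the form f(x) = C / x^a. A minimal sketch with illustrative parameters (C and the exponent below are not taken from the paper's fitted data):

```python
def lotka(x, c=1.0, alpha=2.0):
    """Power-law frequency f(x) = c / x**alpha; classic Lotka's law uses alpha ~ 2."""
    return c / x ** alpha

# With alpha = 2, hostnames cited twice should occur a quarter as often
# as hostnames cited once:
ratio = lotka(2) / lotka(1)
```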
  10. Webwissenschaft : eine Einführung (2010) 0.03
    Abstract
    The World Wide Web differs structurally and substantially from traditional media and has changed the media system from the ground up. The effects of this technical innovation are radical, both for the media landscape and society and for the disciplines that study media - their history, content, forms, technology, effects, and so on. Against this background, this introduction discusses research questions of a future web science at an overarching level, while also integrating the perspectives of the relevant neighboring disciplines.
    Content
    Inhalt: Ist das Web ein Medium? - Konrad Scherfer Warum und zu welchem Zweck benötigen wir eine Webwissenschaft? - Helmut Volpers 'Diese Site wird nicht mehr gewartet'. Medienanalytische Perspektiven in den Medienwechseln - Rainer Leschke Emergente Öffentlichkeit? Bausteine zu einer Theorie der Weböffentlichkeit - Christoph Ernst Das ICH im Web - Auswirkungen virtueller Identitäten auf soziale Beziehungen - Helmut Volpers / Karin Wunder Technikgeschichte des Webs - Tom Alby Visuelles Denken im Interaktions- und Webdesign - Cyrus Khazaeli Das fotografische Bild im Web - Anja Bohnhof / Kolja Kracht Qualität im Web - Interdisziplinäre Website-Bewertung - David Kratz Für eine neue Poesie der Neugier. Das Web verändert den Journalismus - nicht nur online - Mercedes Bunz Das Web braucht Spezialisten, keine Generalisten. Zur Notwendigkeit einer webspezifischen Professionalisierung in der Ausbildung - Petra Werner Online-Forschung im Web - Methodenschwerpunkte im Überblick - Simone Fühles-Ubach Im Spiel der Moden? - Das Web in der Wirtschaft, die Wirtschaft im Web - Jörg Hoewner Medizin im Web - Martina Waitz Das Web und das Medienrecht - Bernd Holznagel / Thorsten Ricke Suchmaschinenforschung im Kontext einer zukünftigen Webwissenschaft - Dirk Lewandowski
    RSWK
    World Wide Web / Medienwissenschaft / Kommunikationswissenschaft
    Web Site / Gestaltung
    Subject
    World Wide Web / Medienwissenschaft / Kommunikationswissenschaft
    Web Site / Gestaltung
  11. Perez, M.: Web 2.0 im Einsatz für die Wissenschaft (2010) 0.03
    Abstract
    This article is about what Web 2.0 means for scholarship and what benefit Web 2.0 services offer researchers. Within this topic, a study is presented in which researchers from different disciplines were asked, among other things, which Web 2.0 services they know and why they use them. After a short introduction to Web 2.0 and the state of research to date, the results of the study follow; they show that Web 2.0 services are well known and are used for private purposes and entertainment, but have not yet established themselves as tools for scholarship.
    Object
    Web 2.0
  12. Brügger, N.: ¬The archived Web : doing history in the digital age (2018) 0.03
    Abstract
    An original methodological framework for approaching the archived web, both as a source and as an object of study in its own right. As life continues to move online, the web becomes increasingly important as a source for understanding the past. But historians have yet to formulate a methodology for approaching the archived web as a source of study. How should the history of the present be written? In this book, Niels Brügger offers an original methodological framework for approaching the web of the past, both as a source and as an object of study in its own right. While many studies of the web focus solely on its use and users, Brügger approaches the archived web as a semiotic, textual system in order to offer the first book-length treatment of its scholarly use. While the various forms of the archived web can challenge researchers' interactions with it, they also present a range of possibilities for interpretation. The Archived Web identifies characteristics of the online web that are significant now for scholars, investigates how the online web became the archived web, and explores how the particular digitality of the archived web can affect a historian's research process. Brügger offers suggestions for how to translate traditional historiographic methods for the study of the archived web, focusing on provenance, creating an overview of the archived material, evaluating versions, and citing the material. The Archived Web lays the foundations for doing web history in the digital age, offering important and timely guidance for today's media scholars and tomorrow's historians.
    Content
    "How will the history of the present be written? As life continues to move online, the web becomes ever more important for an understanding of the past. This book offers an original theoretical framework for approaching the web of the past, both as a source and as an object of study in its own right"
    LCSH
    Web archives / Social aspects
    World Wide Web / History
    RSWK
    World Wide Web / Archivierung / Digitalisierung / Geschichtswissenschaft / Geschichtsschreibung
    Subject
    World Wide Web / Archivierung / Digitalisierung / Geschichtswissenschaft / Geschichtsschreibung
    Web archives / Social aspects
    World Wide Web / History
  13. Spink, A.; Danby, S.; Mallan, K.; Butler, C.: Exploring young children's web searching and technoliteracy (2010) 0.03
    Abstract
    Purpose - This paper aims to report findings from an exploratory study investigating the web interactions and technoliteracy of children in the early childhood years. Previous research has studied aspects of older children's technoliteracy and web searching; however, few studies have analyzed web search data from children younger than six years of age. Design/methodology/approach - The study explored the Google web searching and technoliteracy of young children who are enrolled in a "preparatory classroom" or kindergarten (the year before young children begin compulsory schooling in Queensland, Australia). Young children were video- and audio-taped while conducting Google web searches in the classroom. The data were qualitatively analysed to understand the young children's web search behaviour. Findings - The findings show that young children engage in complex web searches, including keyword searching and browsing, query formulation and reformulation, relevance judgments, successive searches, information multitasking and collaborative behaviours. The study results provide significant initial insights into young children's web searching and technoliteracy. Practical implications - The use of web search engines by young children is an important research area with implications for educators and web technologies developers. Originality/value - This is the first study of young children's interaction with a web search engine.
  14. Niesner, S.: ¬Die Nutzung bibliothekarischer Normdaten im Web am Beispiel von VIAF und Wikipedia (2015) 0.03
    Abstract
    Library authority data for persons can be put to good use on the web.
  15. MacKay, B.; Watters, C.: ¬An examination of multisession web tasks (2012) 0.03
    0.025416005 = product of:
      0.05083201 = sum of:
        0.05083201 = product of:
          0.10166402 = sum of:
            0.10166402 = weight(_text_:web in 255) [ClassicSimilarity], result of:
              0.10166402 = score(doc=255,freq=22.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59793836 = fieldWeight in 255, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Today, people perform many types of tasks on the web, including those that require multiple web sessions. In this article, we build on research about web tasks and present an in-depth evaluation of the types of tasks people perform on the web over multiple web sessions. Multisession web tasks are goal-based tasks that often contain subtasks requiring more than one web session to complete. We will detail the results of two longitudinal studies that we conducted to explore this topic. The first study was a weeklong web-diary study where participants self-reported information on their own multisession tasks. The second study was a monthlong field study where participants used a customized version of Firefox, which logged their interactions for both their own multisession tasks and their other web activity. The results from both studies found that people perform eight different types of multisession tasks, that these tasks often consist of several subtasks and last for different lengths of time, and that users have unique strategies to help continue them, involving a variety of web and browser tools such as search engines and bookmarks, and external applications such as Notepad or Word. Using the results from these studies, we suggest three guidelines for developers to consider when designing browser-tool features to help people perform these types of tasks: (a) maintain a list of current multisession tasks, (b) support multitasking, and (c) manage task-related information between sessions.
  16. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.03
    0.025416005 = product of:
      0.05083201 = sum of:
        0.05083201 = product of:
          0.10166402 = sum of:
            0.10166402 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
              0.10166402 = score(doc=5300,freq=22.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59793836 = fieldWeight in 5300, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5300)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) with the potentiality of the automated inference of "intelligent" systems (the semantic component).
    Theme
    Semantic Web
  17. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.03
    0.025276989 = product of:
      0.050553977 = sum of:
        0.050553977 = product of:
          0.101107955 = sum of:
            0.101107955 = weight(_text_:web in 425) [ClassicSimilarity], result of:
              0.101107955 = score(doc=425,freq=34.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59466785 = fieldWeight in 425, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=425)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The more well structured the Web content is, the easier it is for search engines to understand Web content and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities - such as tags, blog posts, people, or reviews - within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
  18. Doran, D.; Gokhale, S.S.: ¬A classification framework for web robots (2012) 0.02
    0.024522282 = product of:
      0.049044564 = sum of:
        0.049044564 = product of:
          0.09808913 = sum of:
            0.09808913 = weight(_text_:web in 505) [ClassicSimilarity], result of:
              0.09808913 = score(doc=505,freq=8.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5769126 = fieldWeight in 505, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=505)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The behavior of modern web robots varies widely when they crawl for different purposes. In this article, we present a framework to classify these web robots from two orthogonal perspectives, namely, their functionality and the types of resources they consume. Applying the classification framework to a year-long access log from the UConn SoE web server, we present trends that point to significant differences in their crawling behavior.
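    The scoring trees shown with each entry follow the classic Lucene TF-IDF "explain" output: fieldWeight = tf x idf x fieldNorm, weight = queryWeight x fieldWeight, then the coord(1/2) factors. A minimal sketch reproducing the numbers for entry 18 (doc 505), assuming the standard Lucene formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the constants are copied from the explain tree above:

    ```python
    import math

    # Classic Lucene TF-IDF building blocks (a sketch, not the engine's code).
    def lucene_tf(freq):
        # tf = sqrt(term frequency within the field)
        return math.sqrt(freq)

    def lucene_idf(doc_freq, max_docs):
        # idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    query_weight = 0.17002425   # idf * queryNorm, from the explain tree
    field_norm = 0.0625         # length normalization for this document's field

    tf = lucene_tf(8.0)                    # freq=8.0 for "web" in doc 505
    idf = lucene_idf(4597, 44218)          # ~ 3.2635105, as printed above
    field_weight = tf * idf * field_norm   # ~ 0.5769126
    weight = query_weight * field_weight   # ~ 0.09808913
    final = weight * 0.5 * 0.5             # two coord(1/2) factors ~ 0.0245

    print(field_weight, weight, final)
    ```

    The same arithmetic reproduces every tree on this page; only freq, fieldNorm, and the coord factors vary per entry.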
  19. Fu, T.; Abbasi, A.; Chen, H.: ¬A focused crawler for Dark Web forums (2010) 0.02
    0.022989638 = product of:
      0.045979276 = sum of:
        0.045979276 = product of:
          0.09195855 = sum of:
            0.09195855 = weight(_text_:web in 3471) [ClassicSimilarity], result of:
              0.09195855 = score(doc=3471,freq=18.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5408555 = fieldWeight in 3471, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3471)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
  20. Mahesh, K.; Karanth, P.: ¬A novel knowledge organization scheme for the Web : superlinks with semantic roles (2012) 0.02
    0.022989638 = product of:
      0.045979276 = sum of:
        0.045979276 = product of:
          0.09195855 = sum of:
            0.09195855 = weight(_text_:web in 822) [ClassicSimilarity], result of:
              0.09195855 = score(doc=822,freq=18.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5408555 = fieldWeight in 822, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We discuss the needs of a knowledge organization scheme for supporting Web-based software applications. We show how it differs from traditional knowledge organization schemes due to the virtual, dynamic, ad-hoc, user-specific and application-specific nature of Web-based knowledge. The sheer size of Web resources also adds to the complexity of organizing knowledge on the Web. As such, a standard, global scheme such as a single ontology for classifying and organizing all Web-based content is unrealistic. There is nevertheless a strong and immediate need for effective knowledge organization schemes to improve the efficiency and effectiveness of Web-based applications. In this context, we propose a novel knowledge organization scheme wherein concepts in the ontology of a domain are semantically interlinked with specific pieces of Web-based content using a rich hyper-linking structure known as Superlinks with well-defined semantic roles. We illustrate how such a knowledge organization scheme improves the efficiency and effectiveness of a Web-based e-commerce retail store.

Languages

  • e 68
  • d 43

Types

  • a 94
  • m 13
  • el 10
  • s 3
  • x 1

Subjects

Classifications