Search (121 results, page 1 of 7)

  • theme_ss:"Internet"
  • year_i:[2010 TO 2020}
  1. Social Media und Web Science : das Web als Lebensraum, Düsseldorf, 22. - 23. März 2012, Proceedings, hrsg. von Marlies Ockenfeld, Isabella Peters und Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.09
    0.09480259 = product of:
      0.18960518 = sum of:
        0.09537092 = weight(_text_:wide in 1517) [ClassicSimilarity], result of:
          0.09537092 = score(doc=1517,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.4846142 = fieldWeight in 1517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.073171996 = weight(_text_:web in 1517) [ClassicSimilarity], result of:
          0.073171996 = score(doc=1517,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.50479853 = fieldWeight in 1517, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 1517) [ClassicSimilarity], result of:
              0.04212451 = score(doc=1517,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 1517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1517)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
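
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm, and the per-term sum is scaled by a coordination factor for the fraction of query clauses that matched. A minimal Python sketch, recomputing the 0.09480259 shown for result 1 from the values in the explain tree (the helper function and constant names are illustrative, not part of Lucene's API):
```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """One term's contribution: queryWeight * fieldWeight (Lucene ClassicSimilarity).

    idf itself is 1 + ln(maxDocs / (docFreq + 1)), which matches the idf values
    printed in the explain output (e.g. 1 + ln(44218 / 1431) ~ 4.4308).
    """
    query_weight = idf * query_norm                      # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm    # tf(freq) = sqrt(freq)
    return query_weight * field_weight

QUERY_NORM = 0.044416238    # queryNorm from the explain output above
FIELD_NORM = 0.0546875      # fieldNorm(doc=1517)

terms = [
    # (termFreq in doc 1517, idf)
    (4.0, 4.4307585),       # _text_:wide
    (8.0, 3.2635105),       # _text_:web
]
score = sum(term_weight(f, idf, QUERY_NORM, FIELD_NORM) for f, idf in terms)
# the "_text_:22" clause sits one level deeper and carries its own coord(1/2)
score += 0.5 * term_weight(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)
score *= 3 / 6              # coord(3/6): three of six query clauses matched
print(round(score, 8))      # ~0.0948026, matching the 0.09480259 shown above
```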
    
    RSWK
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
    Subject
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
  2. Stuart, D.: Web metrics for library and information professionals (2014) 0.06
    0.06417656 = product of:
      0.19252968 = sum of:
        0.075397335 = weight(_text_:wide in 2274) [ClassicSimilarity], result of:
          0.075397335 = score(doc=2274,freq=10.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.38312116 = fieldWeight in 2274, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.11713234 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.11713234 = score(doc=2274,freq=82.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
      0.33333334 = coord(2/6)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  3. Webwissenschaft : eine Einführung (2010) 0.06
    0.06265454 = product of:
      0.1879636 = sum of:
        0.08343218 = weight(_text_:wide in 2870) [ClassicSimilarity], result of:
          0.08343218 = score(doc=2870,freq=6.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.42394912 = fieldWeight in 2870, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2870)
        0.104531415 = weight(_text_:web in 2870) [ClassicSimilarity], result of:
          0.104531415 = score(doc=2870,freq=32.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.72114074 = fieldWeight in 2870, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2870)
      0.33333334 = coord(2/6)
    
    Abstract
    Das World Wide Web unterscheidet sich strukturell erheblich von den traditionellen Medien und hat das Mediensystem von Grund auf verändert. Radikal sind die Auswirkungen der webtechnischen Innovation sowohl für die Medienlandschaft und die Gesellschaft als auch für diejenigen Wissenschaften, die sich mit Medien - deren Geschichte, Inhalten, Formen, Technik, Wirkungen usf. - befassen. In dieser Einführung werden vor diesem Hintergrund einerseits Forschungsfragen einer zukünftigen Webwissenschaft auf einer übergeordneten Ebene diskutiert, andererseits werden die Perspektiven der relevanten Bezugswissenschaften integriert.
    Content
    Inhalt: Ist das Web ein Medium? - Konrad Scherfer Warum und zu welchem Zweck benötigen wir eine Webwissenschaft? - Helmut Volpers 'Diese Site wird nicht mehr gewartet'. Medienanalytische Perspektiven in den Medienwechseln - Rainer Leschke Emergente Öffentlichkeit? Bausteine zu einer Theorie der Weböffentlichkeit - Christoph Ernst Das ICH im Web - Auswirkungen virtueller Identitäten auf soziale Beziehungen - Helmut Volpers / Karin Wunder Technikgeschichte des Webs - Tom Alby Visuelles Denken im Interaktions- und Webdesign - Cyrus Khazaeli Das fotografische Bild im Web - Anja Bohnhof / Kolja Kracht Qualität im Web - Interdisziplinäre Website-Bewertung - David Kratz Für eine neue Poesie der Neugier. Das Web verändert den Journalismus - nicht nur online - Mercedes Bunz Das Web braucht Spezialisten, keine Generalisten. Zur Notwendigkeit einer webspezifischen Professionalisierung in der Ausbildung - Petra Werner Online-Forschung im Web - Methodenschwerpunkte im Überblick - Simone Fühles-Ubach Im Spiel der Moden? - Das Web in der Wirtschaft, die Wirtschaft im Web - Jörg Hoewner Medizin im Web - Martina Waitz Das Web und das Medienrecht - Bernd Holznagel / Thorsten Ricke Suchmaschinenforschung im Kontext einer zukünftigen Webwissenschaft - Dirk Lewandowski
    RSWK
    World Wide Web / Medienwissenschaft / Kommunikationswissenschaft
    Web Site / Gestaltung
    Subject
    World Wide Web / Medienwissenschaft / Kommunikationswissenschaft
    Web Site / Gestaltung
  4. Brügger, N.: The archived Web : doing history in the digital age (2018) 0.06
    0.059830263 = product of:
      0.17949079 = sum of:
        0.07707134 = weight(_text_:wide in 5679) [ClassicSimilarity], result of:
          0.07707134 = score(doc=5679,freq=8.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 5679, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=5679)
        0.10241945 = weight(_text_:web in 5679) [ClassicSimilarity], result of:
          0.10241945 = score(doc=5679,freq=48.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.70657074 = fieldWeight in 5679, product of:
              6.928203 = tf(freq=48.0), with freq of:
                48.0 = termFreq=48.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5679)
      0.33333334 = coord(2/6)
    
    Abstract
    An original methodological framework for approaching the archived web, both as a source and as an object of study in its own right. As life continues to move online, the web becomes increasingly important as a source for understanding the past. But historians have yet to formulate a methodology for approaching the archived web as a source of study. How should the history of the present be written? In this book, Niels Brügger offers an original methodological framework for approaching the web of the past, both as a source and as an object of study in its own right. While many studies of the web focus solely on its use and users, Brügger approaches the archived web as a semiotic, textual system in order to offer the first book-length treatment of its scholarly use. While the various forms of the archived web can challenge researchers' interactions with it, they also present a range of possibilities for interpretation. The Archived Web identifies characteristics of the online web that are significant now for scholars, investigates how the online web became the archived web, and explores how the particular digitality of the archived web can affect a historian's research process. Brügger offers suggestions for how to translate traditional historiographic methods for the study of the archived web, focusing on provenance, creating an overview of the archived material, evaluating versions, and citing the material. The Archived Web lays the foundations for doing web history in the digital age, offering important and timely guidance for today's media scholars and tomorrow's historians.
    Content
    "How will the history of the present be written? As life continues to move online, the web becomes ever more important for an understanding of the past. This book offers an original theoretical framework for approaching the web of the past, both as a source and as an object of study in its own right"
    LCSH
    Web archives / Social aspects
    World Wide Web / History
    RSWK
    World Wide Web / Archivierung / Digitalisierung / Geschichtswissenschaft / Geschichtsschreibung
    Subject
    World Wide Web / Archivierung / Digitalisierung / Geschichtswissenschaft / Geschichtsschreibung
    Web archives / Social aspects
    World Wide Web / History
  5. Joint, N.: Web 2.0 and the library : a transformational technology? (2010) 0.05
    0.05485157 = product of:
      0.10970314 = sum of:
        0.03853567 = weight(_text_:wide in 4202) [ClassicSimilarity], result of:
          0.03853567 = score(doc=4202,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 4202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.059131898 = weight(_text_:web in 4202) [ClassicSimilarity], result of:
          0.059131898 = score(doc=4202,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 4202, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.012035574 = product of:
          0.024071148 = sum of:
            0.024071148 = weight(_text_:22 in 4202) [ClassicSimilarity], result of:
              0.024071148 = score(doc=4202,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.15476047 = fieldWeight in 4202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4202)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Purpose - This paper is the final one in a series which has tried to give an overview of so-called transformational areas of digital library technology. The aim has been to assess how much real transformation these applications can bring about, in terms of creating genuine user benefit and also changing everyday library practice. Design/methodology/approach - The paper provides a summary of some of the legal and ethical issues associated with web 2.0 applications in libraries, associated with a brief retrospective view of some relevant literature. Findings - Although web 2.0 innovations have had a massive impact on the larger World Wide Web, the practical impact on library service delivery has been limited to date. What probably can be termed transformational in the effect of web 2.0 developments on library and information work is their effect on some underlying principles of professional practice. Research limitations/implications - The legal and ethical challenges of incorporating web 2.0 platforms into mainstream institutional service delivery need to be subject to further research, so that the risks associated with these innovations are better understood at the strategic and policy-making level. Practical implications - This paper makes some recommendations about new principles of library and information practice which will help practitioners make better sense of these innovations in their overall information environment. Social implications - The paper puts in context some of the more problematic social impacts of web 2.0 innovations, without denying the undeniable positive contribution of social networking to the sphere of human interactivity. Originality/value - This paper raises some cautionary points about web 2.0 applications without adopting a precautionary approach of total prohibition. However, none of the suggestions or analysis in this piece should be considered to constitute legal advice. If such advice is required, the reader should consult appropriate legal professionals.
    Date
    22. 1.2011 17:54:04
  6. Yang, S.; Han, R.; Ding, J.; Song, Y.: The distribution of Web citations (2012) 0.05
    0.0547482 = product of:
      0.16424459 = sum of:
        0.108632244 = weight(_text_:web in 2735) [ClassicSimilarity], result of:
          0.108632244 = score(doc=2735,freq=24.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.7494315 = fieldWeight in 2735, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
        0.05561234 = weight(_text_:computer in 2735) [ClassicSimilarity], result of:
          0.05561234 = score(doc=2735,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.34261024 = fieldWeight in 2735, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
      0.33333334 = coord(2/6)
    
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
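    The power law (Lotka function) referred to above has, in its classical form, the shape below; the constant and exponent fitted to the three data sets are not given in the abstract:
```latex
% Lotka-type power law: f(n) is the number of sources (here, cited hostnames)
% that are cited n times; \alpha is approximately 2 in Lotka's original formulation.
f(n) = \frac{C}{n^{\alpha}}
```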
  7. Rogers, R.: Digital methods (2013) 0.05
    0.051765166 = product of:
      0.15529549 = sum of:
        0.07707134 = weight(_text_:wide in 2354) [ClassicSimilarity], result of:
          0.07707134 = score(doc=2354,freq=8.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 2354, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
        0.078224145 = weight(_text_:web in 2354) [ClassicSimilarity], result of:
          0.078224145 = score(doc=2354,freq=28.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5396523 = fieldWeight in 2354, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
      0.33333334 = coord(2/6)
    
    Abstract
    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).
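    One of the "methods of the medium" named above, PageRank, reduces to a short power iteration; a minimal sketch on a toy link graph (the graph, damping factor and iteration count are illustrative assumptions, not taken from Rogers' book):
```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration over a link graph given as {page: [outgoing links]}."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outgoing in links.items():
            targets = outgoing or pages          # dangling pages spread rank evenly
            share = damping * rank[p] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))   # "c" accumulates the highest rank in this toy graph
```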
    Content
    The end of the virtual : digital methods -- The link and the politics of Web space -- The website as archived object -- Googlization and the inculpable engine -- Search as research -- National Web studies -- Social media and post-demographics -- Wikipedia as cultural reference -- After cyberspace : big data, small data.
    LCSH
    Web search engines
    World Wide Web / Research
    RSWK
    Internet / Recherche / World Wide Web 2.0
    Subject
    Internet / Recherche / World Wide Web 2.0
    Web search engines
    World Wide Web / Research
  8. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.04
    0.04494749 = product of:
      0.13484247 = sum of:
        0.04816959 = weight(_text_:wide in 5300) [ClassicSimilarity], result of:
          0.04816959 = score(doc=5300,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.08667288 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
          0.08667288 = score(doc=5300,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.59793836 = fieldWeight in 5300, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
      0.33333334 = coord(2/6)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goal of this comprehensive SW is the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) with the potentiality of the automated inference of "intelligent" systems (the semantic component).
    Theme
    Semantic Web
  9. Geiselberger, H. u.a. [Red.]: Big Data : das neue Versprechen der Allwissenheit (2013) 0.04
    0.04203181 = product of:
      0.12609543 = sum of:
        0.0817465 = weight(_text_:wide in 2484) [ClassicSimilarity], result of:
          0.0817465 = score(doc=2484,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.4153836 = fieldWeight in 2484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2484)
        0.04434892 = weight(_text_:web in 2484) [ClassicSimilarity], result of:
          0.04434892 = score(doc=2484,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 2484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2484)
      0.33333334 = coord(2/6)
    
    RSWK
    World Wide Web / Privatsphäre / Datenschutz / Aufsatzsammlung (BVB)
    Subject
    World Wide Web / Privatsphäre / Datenschutz / Aufsatzsammlung (BVB)
  10. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.04
    0.04157816 = product of:
      0.124734476 = sum of:
        0.03853567 = weight(_text_:wide in 425) [ClassicSimilarity], result of:
          0.03853567 = score(doc=425,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
        0.08619881 = weight(_text_:web in 425) [ClassicSimilarity], result of:
          0.08619881 = score(doc=425,freq=34.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.59466785 = fieldWeight in 425, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
      0.33333334 = coord(2/6)
    
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The better structured the Web content is, the easier it is for search engines to understand Web content and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thereby help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities (such as tags, blog posts, people, or reviews) within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from several of these sites. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
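    As a small illustration of the embedded structured data discussed above, the sketch below pulls Microdata-style itemprop values out of an HTML fragment using only the Python standard library (the HTML snippet and property names are invented for the example; production extractors handle RDFa, Microdata, nesting and typing far more completely):
```python
from html.parser import HTMLParser

class ItempropCollector(HTMLParser):
    """Collects (itemprop, text) pairs from Microdata-annotated HTML."""
    def __init__(self):
        super().__init__()
        self.current = None
        self.items = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self.current = attrs["itemprop"]

    def handle_data(self, data):
        if self.current and data.strip():
            self.items.append((self.current, data.strip()))
            self.current = None

html = """
<div itemscope itemtype="https://schema.org/Review">
  <span itemprop="itemReviewed">Some Restaurant</span>
  <span itemprop="ratingValue">4.5</span>
</div>
"""
collector = ItempropCollector()
collector.feed(html)
print(collector.items)  # [('itemReviewed', 'Some Restaurant'), ('ratingValue', '4.5')]
```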
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
  11. Schillinger, T.; Winterschladen, S.: Was die Welt zusammenhält : Sieben Menschen schützen das Internet (2010) 0.04
    0.04045966 = product of:
      0.08091932 = sum of:
        0.034061044 = weight(_text_:wide in 3718) [ClassicSimilarity], result of:
          0.034061044 = score(doc=3718,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.17307651 = fieldWeight in 3718, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3718)
        0.018478718 = weight(_text_:web in 3718) [ClassicSimilarity], result of:
          0.018478718 = score(doc=3718,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.12748088 = fieldWeight in 3718, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3718)
        0.028379556 = weight(_text_:computer in 3718) [ClassicSimilarity], result of:
          0.028379556 = score(doc=3718,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.17483756 = fieldWeight in 3718, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3718)
      0.5 = coord(3/6)
    
    Content
    "Die Geschichte klingt wie eine Verschwörungstheorie oder ein Endzeit-Thriller. Es geht um das Internet. Es geht um sieben Menschen. Es geht um sieben Schlüssel, mit denen die Hüter das World Wide Web retten können. Der Plot geht so: Im Falle eines Cyberangriffs kommen die Bewahrer an einem geheimen Ort in den USA zusammen, um dort das Internet neu zu starten. So fiktiv es sich anhört: Seit vergangener Woche, mit Einführung des Online-Sicherheitssystems DNSSEC (Domain Name System Security), gibt es dieses geheimnisvolle Team tatsächlich. Das Internet ist ein verführerisches Ziel für Attacken. Mehr als zwei Drittel aller Deutschen sind online, weltweit jeder Fünfte. Online-Banking, Internetauktionen, soziale Netzwerke wie Facebook und Twitter - ein großer Teil unseres Lebens spielt sich in der virtuellen Welt ab. "Wenn das Internet weltweit lahm gelegt ist, ist die Welt lahm gelegt", sagt Isabell Unseld vom Anti-Viren-Spezialisten McAfee. Kaum vorstellbar, wenn Kriminelle diese Schwäche ausnutzen könnten. An diesem Punkt der Geschichte kommen die sieben Herrscher über das Internet wieder ins Spiel. Sie leben in Tschechien, Kanada, China, Trinidad Tobago, Burkina Faso, USA und Großbritannien. Einer von ihnen hat sich jetzt verraten. Paul Kane, ein Engländer, hat erzählt, dass er seinen Schlüssel in einer bombensicheren Tasche in einem Tresor aufbewahre.
    Dass es die sieben Retter tatsächlich gibt, sagt auch Costin Raiu, Chef des weltweiten Virenanalystenteams der Internet-Sicherheitsfirma Kaspersky: "Das ist kein Märchen, das ist wirklich so. Ich habe zwar noch keinen persönlich kennen gelernt, weil sie im Verborgenen agieren. Aber dass es sie gibt, ist bestätigt." Sollte einmal das Netz beschädigt werden, müssen fünf der sieben Auserwählten zusammenkommen, um mit ihren Freischalt-Karten gemeinsam die Cyberattacke abzuwehren. Das hat die Non-Profit-Organisation ICANN bekanntgegeben, die als eine Art Weltregierung des Netzes gilt. Dass es technisch gar nicht so kompliziert ist, das Internet grundlegend zu stören, weiß man seit rund zwei Jahren, als Sicherheitsexperten ein kritisches Problem beim so genannten Domain Name System (DNS) ausmachten. Das DNS ist dafür zuständig, die Ziffern-Adresse einer Internetseite mit einem für den Nutzer leicht merkbaren Namen zu verknüpfen. "Bislang war es Kriminellen möglich, sich bei diesem Schritt dazwischen zu schalten, um damit Firmen auszuspionieren oder im einfachsten Fall Suchanfragen auf eine Seite zu lenken, um damit Aufmerksamkeit zu erregen", sagt Jürgen Kuri, Internetexperte der Computer-Fachzeitschrift CT.
    "Gefährlicher Alltag: Sicherheit und Überlegenheit in Forschung, Wirtschaft und Militär waren Antriebe für die Entwicklung des Internets und sie sind zugleich auch eine Gefahr für die Online-Nutzer. Seit in fast allen Büros in den Industrie- und Schwellenländern Computer mit Internetanschluss stehen, spähen Datendiebe im Auftrag von Staaten und Firmen Forschungsinformationen oder Regierungspläne aus. Cyberwar ist kein Kino mehr, es ist Alltag. Das Bundesamt für Sicherheit in der Informationstechnik und der Verfassungsschutz warnen regelmäßig vor Angriffen auf die Computer der Bundesbehörden. Vornehmlich handelt es sich um Versuche, sensible Berichte zu erhalten. Forschungsinstitute und forschende Unternehmen sind ebenso Ziel von Spionageangriffen aus dem Netz. Aufgrund der Sensibilität des Internets ernannte beispielsweise US-Präsident Barack Obama Cybersicherheit zu einer der wichtigsten Aufgaben innerhalb der Landesverteidigung. Dieses Vorgehen dürfte nicht zuletzt auch aus der Erfahrung der USA selbst resultieren, die im Kosovo-Krieg auch Cyber-Waffen eingesetzt haben sollen: Manipulation von Luftabwehr und Telefonnetzen sowie von Bankkonten."
    "Am Anfang war der Schock: Ohne Satelliten gäbe es womöglich gar kein Internet. Denn als Antwort auf den Sputnik-Schock gründet die US-Regierung 1957 ihr neues Forschungszentrum Arpa (Advanced Research Projects Agency). Nachdem die Wissenschaftler in 18 Monaten den ersten US-Satelliten gebaut haben, entwickeln sie Computernetzwerke. Die Dezentralität ist ihnen besonders wichtig. Die verschickten Informationen werden in Pakete zerteilt und über verschiedene Wege zum Empfänger geschickt. Fällt unterwegs ein Knoten aus, etwa im Krieg, suchen sich die Teile der Botschaft den unbeschädigten Weg. Die Kommunikation ist so kaum zu unterbinden. Das Arpanet gilt als die Geburtsstunde des Internet. Die Universität Los Angeles (UCLA) verbindet sich 1969 mit dem Rechner von Stanford. Als erste Botschaft ist das Wort "login" verabredet. Doch nach dem L und dem O ist Schluss. Die Verbindung ist zusammengebrochen, die Revolution hat begonnen. Über das rasant wachsende Netz tauschen sich Forscher aus. Damals lässt sich das Netz aber nur von Profis nutzen. Erst mit der Entwicklung des World Wide Web am Kernforschungszentrum Cern bekommt das Netz Anfang der 90er Jahre das Gesicht, das wir heute kennen."
  12. Villela Dantas, J.R.; Muniz Farias, P.F.: Conceptual navigation in knowledge management environments using NavCon (2010) 0.04
    0.04017412 = product of:
      0.12052235 = sum of:
        0.057803504 = weight(_text_:wide in 4230) [ClassicSimilarity], result of:
          0.057803504 = score(doc=4230,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 4230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
        0.062718846 = weight(_text_:web in 4230) [ClassicSimilarity], result of:
          0.062718846 = score(doc=4230,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 4230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
      0.33333334 = coord(2/6)
    
    Abstract
    This article presents conceptual navigation and NavCon, an architecture that implements this navigation in World Wide Web pages. NavCon architecture makes use of ontology as metadata to contextualize user search for information. Based on ontologies, NavCon automatically inserts conceptual links in Web pages. By using these links, the user may navigate in a graph representing ontology concepts and their relationships. By browsing this graph, it is possible to reach documents associated with the user's desired ontology concept. This Web navigation supported by ontology concepts we call conceptual navigation. Conceptual navigation is a technique to browse Web sites within a context. The context filters relevant retrieved information. The context also drives user navigation through paths that meet his needs. A company may implement conceptual navigation to improve user search for information in a knowledge management environment. We suggest that the use of an ontology to conduct navigation in an Intranet may help the user to have a better understanding about the knowledge structure of the company.
  13. Nejdl, W.; Risse, T.: Herausforderungen für die nationale, regionale und thematische Webarchivierung und deren Nutzung (2015) 0.04
    0.04017412 = product of:
      0.12052235 = sum of:
        0.057803504 = weight(_text_:wide in 2531) [ClassicSimilarity], result of:
          0.057803504 = score(doc=2531,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 2531, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2531)
        0.062718846 = weight(_text_:web in 2531) [ClassicSimilarity], result of:
          0.062718846 = score(doc=2531,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 2531, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2531)
      0.33333334 = coord(2/6)
    
    Abstract
    Das World Wide Web ist als weltweites Informations- und Kommunikationsmedium etabliert. Neue Technologien erweitern regelmäßig die Nutzungsformen und erlauben es auch unerfahrenen Nutzern, Inhalte zu publizieren oder an Diskussionen teilzunehmen. Daher wird das Web auch als eine gute Dokumentation der heutigen Gesellschaft angesehen. Aufgrund seiner Dynamik sind die Inhalte des Web vergänglich und neue Technologien und Nutzungsformen stellen regelmäßig neue Herausforderungen an die Sammlung von Webinhalten für die Webarchivierung. Dominierten in den Anfangstagen der Webarchivierung noch statische Seiten, so hat man es heute häufig mit dynamisch generierten Inhalten zu tun, die Informationen aus verschiedenen Quellen integrieren. Neben dem klassischen domainorientierten Webharvesting kann auch ein steigendes Interesse aus verschiedenen Forschungsdisziplinen an thematischen Webkollektionen und deren Nutzung und Exploration beobachtet werden. In diesem Artikel werden einige Herausforderungen und Lösungsansätze für die Sammlung von thematischen und dynamischen Inhalten aus dem Web und den sozialen Medien vorgestellt. Des Weiteren werden aktuelle Probleme der wissenschaftlichen Nutzung diskutiert und gezeigt, wie Webarchive und andere temporale Kollektionen besser durchsucht werden können.
  14. Klic, L.; Miller, M.; Nelson, J.K.; Germann, J.E.: Approaching the largest 'API' : extracting information from the Internet with Python (2018) 0.04
    0.04017412 = product of:
      0.12052235 = sum of:
        0.057803504 = weight(_text_:wide in 4239) [ClassicSimilarity], result of:
          0.057803504 = score(doc=4239,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 4239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
        0.062718846 = weight(_text_:web in 4239) [ClassicSimilarity], result of:
          0.062718846 = score(doc=4239,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 4239, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
      0.33333334 = coord(2/6)
    
    Abstract
    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for all sized projects. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
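    A minimal sketch of the urllib plus BeautifulSoup combination named in the abstract, fetching one page and listing its links (the target URL and User-Agent string are placeholders; bs4 is a third-party install, and Selenium or Scrapy would be the heavier-weight choices for dynamic pages and larger crawls):
```python
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup   # pip install beautifulsoup4

URL = "https://example.org/"    # placeholder target

# A descriptive User-Agent header; many sites reject the urllib default.
request = Request(URL, headers={"User-Agent": "research-crawler-example/0.1"})
with urlopen(request) as response:
    html = response.read()

soup = BeautifulSoup(html, "html.parser")
links = [(a.get_text(strip=True), a.get("href")) for a in soup.find_all("a", href=True)]
for text, href in links:
    print(text or "(no text)", "->", href)
```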
  15. Firnkes, M.: Das gekaufte Web : wie wir online manipuliert werden (2015) 0.04
    0.03737321 = product of:
      0.11211963 = sum of:
        0.057803504 = weight(_text_:wide in 2117) [ClassicSimilarity], result of:
          0.057803504 = score(doc=2117,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 2117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2117)
        0.054316122 = weight(_text_:web in 2117) [ClassicSimilarity], result of:
          0.054316122 = score(doc=2117,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 2117, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2117)
      0.33333334 = coord(2/6)
    
    Abstract
    Was wir online lesen und sehen, auf Webseiten, in Blogs und sozialen Netzwerken, das ist immer öfter verfremdet und manipuliert. Gefälschte Inhalte werden genutzt, um versteckte Werbung zu platzieren und Einnahmen zu generieren, aber auch um die öffentliche Meinung zu Gunsten von Interessensverbänden und der Politik zu steuern. Neue Technologien der digitalen Welt befeuern den Trend zu rein künstlich generiertem Content. Wir sind an einem Punkt angelangt, an dem wir uns entscheiden müssen: Zwischen einem "freien" oder einem von kommerziellen Interessen beherrschten World Wide Web. Das Buch deckt auf verständliche Weise die unterschiedlichen Methoden der Manipulation auf. Es zeigt, wie fremdgesteuerte Inhalte alle Internetnutzer betreffen, geht aber gleichzeitig auf mögliche Auswege und Lösungsmöglichkeiten ein. Als Plädoyer für ein nachhaltig unabhängiges Internet.
    Theme
    Semantic Web
  16. Majica, M.: Eine ganz große Nummer : dem User eröffnet die Umstellung viele ungekannte Möglichkeiten - zumindest in Zukunft (2012) 0.04
    0.036193818 = product of:
      0.072387636 = sum of:
        0.028901752 = weight(_text_:wide in 224) [ClassicSimilarity], result of:
          0.028901752 = score(doc=224,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.14686027 = fieldWeight in 224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=224)
        0.015679711 = weight(_text_:web in 224) [ClassicSimilarity], result of:
          0.015679711 = score(doc=224,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.108171105 = fieldWeight in 224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=224)
        0.02780617 = weight(_text_:computer in 224) [ClassicSimilarity], result of:
          0.02780617 = score(doc=224,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.17130512 = fieldWeight in 224, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=224)
      0.5 = coord(3/6)
    
    Abstract
    An diesem Mittwoch ändert sich die Architektur des World Wide Web: Provider, Betreiber von Webseiten und Hersteller von Computern und Smartphones stellen auf den neuen Adressstandard IPv6 um.
    Content
    "Nun beginnt für das Internet eine neue Zeitrechnung. Die Organisation Internet Society spricht von einem Meilenstein auf dem Weg in die Zukunft: Am 6. Juni wird das neue Internet Protocol Version 6 eingeführt, Insidern besser bekannt unter der Abkürzung IPv6. Für diese technische Neuerung wurde es nach Ansicht von Experten höchste Zeit. Wir dokumentieren die wichtigsten Fragen zum Start, dem IPv6 Launch Day. IPv6 kommt - wird alles anders? In den Maschinenräumen des Internets schon. Doch so groß die Veränderungen im Inneren auch sind: Äußerlich wird sich erst mal nichts ändern. Auf den ersten, zweiten und auch dritten Blick wird das Internet nach diesem 6. Juni genauso funktionieren wie sonst. Warum wird diese Umstellung überhaupt gemacht? Weil die bisherigen IP-Adressen knapp geworden sind. Mit dem bisher üblichen System können rund vier Milliarden Adressen ausgeben werden. Da die Zahl der weltweiten PC, Tablet-Computer, Smartphone, Spiele-Konsolen und ähnlichen Geräten rasant gestiegen ist, gibt es längst zu wenige Adressen. Bisher funktioniert das Internet nur deshalb meist reibungslos, weil all diese Geräte nicht gleichzeitig online sind.
    Wie viel mehr neue IP-Adressen sind denn nun möglich? Mit dem neuen IPv6 stehen 340 Sextillionen Adressen zur Verfügung - das ist eine 34 mit 37 Nullen. Das reicht zumindest fürs Erste. Was sind IP-Adressen überhaupt? Sie sind quasi die Adresse von Computern, an die Daten oder Anfragen geschickt werden. Jede Homepage hat eine eine solche Adresse, und jeder Internetnutzer auch. IP-Adressen in der bisher gängigen Version 4 bestehen aus vier Blöcken maximal dreistelliger Zahlen mit Werten zwischen 0 und 255. So hat etwa die Webseite des Chaos Computer Clubs die IP-Adresse 213.73.89.122. Da sich aber www.ccc.de leichter merken lässt, übersetzen sogenannte DNS-Server zwischen den Wort- und den Zahl-Adressen. Und was ist anders an IPv6? Die Adressen sehen zum einen anders aus: Der Wikipedia-Eintrag zu IPv6 findet sich etwa unter 2001:0db8:85a3:08d3:1319:8a2e:0370:7344. Zum anderen enthalten sie 128 Bit Information statt bisher 32 Bit. Dadurch vergrößert sich die Anzahl möglicher Adressen, der sogenannte Adressraum. Außerdem wird durch den neuartigen Aufbau auch die Verwaltung in den Innereien des Internets vereinfacht. Deshalb haben zahllose Unternehmen und Initiativen seit Jahren an der Einführung des neuen Protokolls gearbeitet. Große Software-Firmen wie Microsoft, Apple oder Google haben die meisten aktuellen Programme längst so überarbeitet, dass sie IPv6 "sprechen". Sind die alten Adressen damit überholt? Für einige Jahre werden beide Protokoll-Versionen parallel laufen, auch um einen reibungslosen Übergang zu gewährleisten. Die Netzbetreiber müssen sich bei der Umstellung auf IPv6 ohnehin erst vom Skelett des Internets zum einzelnen Kunden vorarbeiten. Während die zentrale Infrastruktur etwa bei der Telekom bereits IPv6 beherrsche, wie deren Sprecher Ralf Sauerzapf erläutert, soll es bis Ende dieses Jahres immerhin bei bis zu 800.000 Endkunden angekommen sein. Dies gelte aber nur für Kunden, die auf neue "IP-Anschlüsse" umstellen.
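    The address arithmetic in the passage above (roughly four billion IPv4 addresses versus 2^128 for IPv6, a 34 followed by 37 zeros) can be checked with Python's standard ipaddress module; a small sketch using the two example addresses quoted in the article:
```python
import ipaddress

v4 = ipaddress.ip_address("213.73.89.122")                            # IPv4: 32-bit
v6 = ipaddress.ip_address("2001:0db8:85a3:08d3:1319:8a2e:0370:7344")  # IPv6: 128-bit

print(v4.version, 2 ** 32)    # 4  4294967296          -> "rund vier Milliarden Adressen"
print(v6.version, 2 ** 128)   # 6  ~3.4e38 addresses   -> "eine 34 mit 37 Nullen"
print(v6.compressed)          # 2001:db8:85a3:8d3:1319:8a2e:370:7344
```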
  17. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.04
    0.035583735 = product of:
      0.1067512 = sum of:
        0.08869784 = weight(_text_:web in 2158) [ClassicSimilarity], result of:
          0.08869784 = score(doc=2158,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6119082 = fieldWeight in 2158, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.03610672 = score(doc=2158,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  18. Schaarwächter, M.: InetBib: Etabliert (2010) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 3477) [ClassicSimilarity], result of:
          0.067437425 = score(doc=3477,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 3477, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3477)
        0.036585998 = weight(_text_:web in 3477) [ClassicSimilarity], result of:
          0.036585998 = score(doc=3477,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 3477, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3477)
      0.33333334 = coord(2/6)
    
    Abstract
    Die 1994 gegründete Mailingliste hat zurzeit 6500 Teilnehmer / Fachbeiträge, turbulente Diskussionen und jede Menge Stellenanzeigen. InetBib ist seit 1994, also seit der Steinzeit des breit genutzten World Wide Web, eine Gruppe von Personen, die auf elektronischen Wegen über Internetnutzung in Bibliotheken diskutieren. Steinzeit? Ist InetBib ein Fossil? Laut Wikipedia ist ein Fossil ein Zeugnis vergangenen Lebens aus der Erdgeschichte. Das Attribut vergangen passt aber in keiner Weise auf InetBib: Obwohl schon oft totgesagt, erfreut sich dieser Mailverteiler immer größerer Beliebtheit. Zurzeit etwa 6500 Teilnehmer lesen und schreiben über neue Projekte, Ideen und Stellenangebote rund um Bibliotheken im Allgemeinen und Informationsvermittlung im Besonderen.
  19. Berners-Lee, T.: The Father of the Web will give the Internet back to the people (2018) 0.03
    0.033748515 = product of:
      0.10124554 = sum of:
        0.05449767 = weight(_text_:wide in 4495) [ClassicSimilarity], result of:
          0.05449767 = score(doc=4495,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.2769224 = fieldWeight in 4495, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4495)
        0.04674787 = weight(_text_:web in 4495) [ClassicSimilarity], result of:
          0.04674787 = score(doc=4495,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.32250395 = fieldWeight in 4495, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4495)
      0.33333334 = coord(2/6)
    
    Content
    "This week, Berners-Lee will launch Inrupt ( https://www.password-online.de/?email_id=571&user_id=1045&urlpassed=aHR0cHM6Ly93d3cuaW5ydXB0LmNvbQ&controller=stats&action=analyse&wysija-page=1&wysijap=subscriptions ), a startup that he has been building, in stealth mode, for the past nine months. For years now, Berners-Lee and other internet activists have been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over. "We have to do it now," he says, displaying an intensity and urgency that is uncharacteristic for this soft-spoken academic. "It's a historical moment." If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. . On his screen, there is a simple-looking web page with tabs across the top: Tim's to-do list, his calendar, chats, address book. He built this app-one of the first on Solid for his personal use. It is simple, spare. In fact, it's so plain that, at first glance, it's hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid's decentralized technology, allows Berners-Lee to access all of his data seamlessly-his calendar, his music library, videos, chat, research. It's like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp. The difference here is that, on Solid, all the information is under his control. In: Exclusive: Tim Berners-Lee tells us his radical new plan to upend the World Wide Web ( https://www.password-online.de/?email_id=571&user_id=1045&urlpassed=aHR0cHM6Ly93d3cuZmFzdGNvbXBhbnkuY29tLzkwMjQzOTM2L2V4Y2x1c2l2ZS10aW0tYmVybmVycy1sZWUtdGVsbHMtdXMtaGlzLXJhZGljYWwtbmV3LXBsYW4tdG8tdXBlbmQtdGhlLXdvcmxkLXdpZGUtd2Vi&controller=stats&action=analyse&wysija-page=1&wysijap=subscriptions ), in: https://www.fastcompany.com/90243936/exclusive-tim-berners-lee-tells-us-his-radical-new-plan-to-upend-the-world-wide-web ( https://www.password-online.de/?email_id=571&user_id=1045&urlpassed=aHR0cHM6Ly93d3cuZmFzdGNvbXBhbnkuY29tLzkwMjQzOTM2L2V4Y2x1c2l2ZS10aW0tYmVybmVycy1sZWUtdGVsbHMtdXMtaGlzLXJhZGljYWwtbmV3LXBsYW4tdG8tdXBlbmQtdGhlLXdvcmxkLXdpZGUtd2Vi&controller=stats&action=analyse&wysija-page=1&wysijap=subscriptions)."
  20. Qualman, E.: Socialnomics : wie Social-Media Wirtschaft und Gesellschaft verändern (2010) 0.03
    0.03023614 = product of:
      0.09070842 = sum of:
        0.05449767 = weight(_text_:wide in 3588) [ClassicSimilarity], result of:
          0.05449767 = score(doc=3588,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.2769224 = fieldWeight in 3588, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3588)
        0.036210746 = weight(_text_:web in 3588) [ClassicSimilarity], result of:
          0.036210746 = score(doc=3588,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.24981049 = fieldWeight in 3588, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3588)
      0.33333334 = coord(2/6)
    
    Footnote
    Rez. in Mitt. VÖB 63(2010) H.1/2, S.148-149 (M. Buzinkay): "Endlich wieder ein Buchtitel, der mich nicht nur gleich angesprochen, sondern auch das gehalten hat, was er versprochen hat. Eine vertiefende Lektüre in ein sehr aktuelles und sehr wichtiges Thema, das sowohl Einzelpersonen wie auch Organisationen, Unternehmen und Vereine gleichermaßen beschäftigen muss: "Wie Social Media Wirtschaft und Gesellschaft verändern" heißt der Untertitel des Werkes von Erik Qualman. Der Autor liefert für seine Behauptungen ausgesuchte Beispiele, die seine Argumentation untermauern. Schön ist, dass man diese Beispiele gleich als Hands-on Tipps für seine eigene Online Arbeit nutzen kann. Bei der schier unendlichen Anzahl von Beispielen muss man sich aber fragen, ob man je in der Lage sein wird, diese nützlichen Hinweise jemals nur annähernd im eigenen Unternehmen umzusetzen. Um es kurz zu fassen: man kann das Buch mit ins Bett nehmen und in einem durchlesen. Fad wird einem nicht, nur genug Post-its sollte man mitnehmen, um alles wichtige zu markieren und zu notieren. Am nächsten Morgen sollte man das Buch aber sein lassen, denn das Geheimnis von Socialnomics ist die Umsetzung im Web. Eine dringende Empfehlung an alle Marketing-Interessierten!"
    RSWK
    Unternehmen / World Wide Web 2.0 / Marketing (BVB)
    Subject
    Unternehmen / World Wide Web 2.0 / Marketing (BVB)

Languages

  • e 73
  • d 47

Types

  • a 100
  • m 15
  • el 12
  • s 4
  • x 1
