Search (221 results, page 1 of 12)

  • × theme_ss:"Internet"
  • × year_i:[2010 TO 2020}
  1. Social Media und Web Science : das Web als Lebensraum, Düsseldorf, 22. - 23. März 2012, Proceedings, hrsg. von Marlies Ockenfeld, Isabella Peters und Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.12
    0.119222365 = product of:
      0.3338226 = sum of:
        0.06362897 = weight(_text_:wide in 1517) [ClassicSimilarity], result of:
          0.06362897 = score(doc=1517,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.4846142 = fieldWeight in 1517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.048818428 = weight(_text_:web in 1517) [ClassicSimilarity], result of:
          0.048818428 = score(doc=1517,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.50479853 = fieldWeight in 1517, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.07248254 = weight(_text_:elektronische in 1517) [ClassicSimilarity], result of:
          0.07248254 = score(doc=1517,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.517232 = fieldWeight in 1517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.13952453 = weight(_text_:kongress in 1517) [ClassicSimilarity], result of:
          0.13952453 = score(doc=1517,freq=4.0), product of:
            0.19442701 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.029633347 = queryNorm
            0.71761906 = fieldWeight in 1517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.009368123 = product of:
          0.028104367 = sum of:
            0.028104367 = weight(_text_:22 in 1517) [ClassicSimilarity], result of:
              0.028104367 = score(doc=1517,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.2708308 = fieldWeight in 1517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1517)
          0.33333334 = coord(1/3)
      0.35714287 = coord(5/14)
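The nested tree above is Lucene ClassicSimilarity "explain" output: each term contributes queryWeight × fieldWeight, i.e. sqrt(freq) · idf² · queryNorm · fieldNorm, and the document score is the sum of matching clauses scaled by the coordination factor. A minimal sketch recomputing this first result's score from the numbers shown:

```python
import math

QUERY_NORM = 0.029633347   # queryNorm, shared by all clauses
FIELD_NORM = 0.0546875     # fieldNorm(doc=1517)

def term_score(freq, idf, field_norm=FIELD_NORM, query_norm=QUERY_NORM):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight,
    i.e. (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

terms = [
    term_score(4.0, 4.4307585),            # _text_:wide
    term_score(8.0, 3.2635105),            # _text_:web
    term_score(4.0, 4.728978),             # _text_:elektronische
    term_score(4.0, 6.5610886),            # _text_:kongress
    term_score(2.0, 3.5018296) * (1 / 3),  # _text_:22, inner coord(1/3)
]

# 5 of 14 query clauses matched -> coord(5/14)
score = sum(terms) * (5 / 14)
print(score)  # close to the 0.119222365 total shown in the explain tree
```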
    
    BK
    05.38 (Neue elektronische Medien) <Kommunikationswissenschaft>
    Classification
    05.38 (Neue elektronische Medien) <Kommunikationswissenschaft>
    RSWK
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
    Subject
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
  2. Stuart, D.: Web metrics for library and information professionals (2014) 0.05
    0.04769524 = product of:
      0.16693333 = sum of:
        0.05030312 = weight(_text_:wide in 2274) [ClassicSimilarity], result of:
          0.05030312 = score(doc=2274,freq=10.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.38312116 = fieldWeight in 2274, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.07814762 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.07814762 = score(doc=2274,freq=82.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.027315384 = weight(_text_:bibliothek in 2274) [ClassicSimilarity], result of:
          0.027315384 = score(doc=2274,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.22452119 = fieldWeight in 2274, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.011167207 = weight(_text_:information in 2274) [ClassicSimilarity], result of:
          0.011167207 = score(doc=2274,freq=20.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21466857 = fieldWeight in 2274, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
      0.2857143 = coord(4/14)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    BK
    06.00 Information und Dokumentation: Allgemeines
    Classification
    06.00 Information und Dokumentation: Allgemeines
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
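Chapter 3's "anatomy of a URL" is the kind of groundwork Python's standard library already covers; a minimal sketch using urllib.parse (the catalogue URL is invented for illustration):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical search URL, purely for illustration
url = "https://example.org/search?q=web+metrics&page=2#results"
parts = urlparse(url)

print(parts.scheme)           # https
print(parts.netloc)           # example.org
print(parts.path)             # /search
print(parse_qs(parts.query))  # {'q': ['web metrics'], 'page': ['2']}
print(parts.fragment)         # results
```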
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  3. Baumeister, H.; Schwärzel, K.: Wissenswelt Internet : Eine Infrastruktur und ihr Recht (2018) 0.04
    0.04387801 = product of:
      0.15357304 = sum of:
        0.03856498 = weight(_text_:wide in 5664) [ClassicSimilarity], result of:
          0.03856498 = score(doc=5664,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 5664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5664)
        0.06212789 = weight(_text_:elektronische in 5664) [ClassicSimilarity], result of:
          0.06212789 = score(doc=5664,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.4433417 = fieldWeight in 5664, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.046875 = fieldNorm(doc=5664)
        0.046826374 = weight(_text_:bibliothek in 5664) [ClassicSimilarity], result of:
          0.046826374 = score(doc=5664,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.38489348 = fieldWeight in 5664, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.046875 = fieldNorm(doc=5664)
        0.0060537956 = weight(_text_:information in 5664) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=5664,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 5664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5664)
      0.2857143 = coord(4/14)
    
    Abstract
    The networking of information, from which knowledge arises, was the original context in which the Internet and its most popular application, the World Wide Web, came into being. From this perspective, the Internet is both a vast store of information and, enabled by its structure of sharing and linking, a medium for the creation, organization, representation and communication of knowledge. With its applications it forms the infrastructure on which knowledge practices draw. This infrastructure, from its origins to its future prospects, is the subject of this volume, which, in the style of a casebook, also works out the legal foundations and challenges. The book is aimed in particular at students of library and information science and at staff of information institutions.
    BK
    05.38 Neue elektronische Medien Kommunikationswissenschaft
    Classification
    05.38 Neue elektronische Medien Kommunikationswissenschaft
    Footnote
    Rez. in: Information - Wissenschaft und Praxis. 71(2020) H.1, S.65-66 (M. Ockenfeld).
    RSWK
    Bibliothek / Wissen / Internet
    Subject
    Bibliothek / Wissen / Internet
  4. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.03
    0.034252778 = product of:
      0.119884714 = sum of:
        0.032137483 = weight(_text_:wide in 5300) [ClassicSimilarity], result of:
          0.032137483 = score(doc=5300,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.24476713 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.057825863 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
          0.057825863 = score(doc=5300,freq=22.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.59793836 = fieldWeight in 5300, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.008737902 = weight(_text_:information in 5300) [ClassicSimilarity], result of:
          0.008737902 = score(doc=5300,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 5300, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.021183468 = weight(_text_:retrieval in 5300) [ClassicSimilarity], result of:
          0.021183468 = score(doc=5300,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 5300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
      0.2857143 = coord(4/14)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) with the potential for automated inference by "intelligent" systems (the semantic component).
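The abstract's two "outcomes" (linking data across the web, and automated inference over it) can be illustrated with a toy triple store; this is a deliberately simplified sketch, not an RDF implementation, and the triples and prefixes are invented:

```python
# Toy triple store: (subject, predicate, object) statements,
# as if merged from several Linked Data sources.
triples = {
    ("dbpedia:Tim_Berners-Lee", "created", "dbpedia:World_Wide_Web"),
    ("dbpedia:World_Wide_Web", "type", "InformationSystem"),
    ("w3c:SemanticWeb", "extends", "dbpedia:World_Wide_Web"),
}

def match(s=None, p=None, o=None):
    """SPARQL-like triple pattern match: None acts as a variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What did Tim Berners-Lee create?"
created = match(s="dbpedia:Tim_Berners-Lee", p="created")
print(created)

# A one-step toy inference: anything that extends X inherits X's type.
for s1, _, o1 in match(p="extends"):
    for _, _, t in match(s=o1, p="type"):
        triples.add((s1, "type", t))
print(match(s="w3c:SemanticWeb", p="type"))
```

A real system would use RDF, URIs and an entailment regime; the point here is only that pattern matching over merged triples is what makes both linking and inference mechanical.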
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.701-714
    Theme
    Semantic Web
  5. Johnson, E.H.: S R Ranganathan in the Internet age (2019) 0.02
    0.02386164 = product of:
      0.08351573 = sum of:
        0.03856498 = weight(_text_:wide in 5406) [ClassicSimilarity], result of:
          0.03856498 = score(doc=5406,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5406)
        0.020922182 = weight(_text_:web in 5406) [ClassicSimilarity], result of:
          0.020922182 = score(doc=5406,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5406)
        0.0060537956 = weight(_text_:information in 5406) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=5406,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5406)
        0.01797477 = weight(_text_:retrieval in 5406) [ClassicSimilarity], result of:
          0.01797477 = score(doc=5406,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5406)
      0.2857143 = coord(4/14)
    
    Abstract
    S R Ranganathan's ideas have influenced library classification since the inception of his Colon Classification in 1933. His address at Elsinore, "Library Classification Through a Century", was his grand vision of the century of progress in classification from 1876 to 1975, and looked to the future of faceted classification as the means to provide a cohesive system to organize the world's information. Fifty years later, the internet, its achievements, social ecology, and consequences present a far more complicated picture, in which the library as he knew it is only a small part and the problems he confronted are greatly exacerbated. The systematic nature of Ranganathan's canons, principles, postulates, and devices suggests that modern semantic algorithms could guide automatic subject tagging. The vision presented here is one of internet-wide faceted classification and retrieval, implemented as open, distributed facets providing unified faceted searching across all web sites.
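At its simplest, the faceted searching the abstract envisions reduces to intersecting facet filters over tagged records; a toy sketch with invented records and facet names (loosely echoing Ranganathan's fundamental categories, not implementing Colon Classification):

```python
# Invented records tagged with facet: value pairs.
records = [
    {"title": "Web metrics primer", "matter": "metrics", "space": "web", "time": "2014"},
    {"title": "Library 2.0 report", "matter": "services", "space": "web", "time": "2010"},
    {"title": "Linked Data survey", "matter": "metrics", "space": "web", "time": "2019"},
]

def faceted_search(records, **facets):
    """Keep only records whose values match every requested facet."""
    return [r for r in records
            if all(r.get(f) == v for f, v in facets.items())]

hits = faceted_search(records, matter="metrics", space="web")
print([r["title"] for r in hits])
```

Each additional facet narrows the result set, which is what makes faceted retrieval compose cleanly across distributed sources.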
  6. Joint, N.: Web 2.0 and the library : a transformational technology? (2010) 0.02
    0.022144219 = product of:
      0.07750476 = sum of:
        0.025709987 = weight(_text_:wide in 4202) [ClassicSimilarity], result of:
          0.025709987 = score(doc=4202,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 4202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.039451245 = weight(_text_:web in 4202) [ClassicSimilarity], result of:
          0.039451245 = score(doc=4202,freq=16.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.4079388 = fieldWeight in 4202, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.0069903214 = weight(_text_:information in 4202) [ClassicSimilarity], result of:
          0.0069903214 = score(doc=4202,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1343758 = fieldWeight in 4202, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.0053532133 = product of:
          0.016059639 = sum of:
            0.016059639 = weight(_text_:22 in 4202) [ClassicSimilarity], result of:
              0.016059639 = score(doc=4202,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.15476047 = fieldWeight in 4202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4202)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    Purpose - This paper is the final one in a series which has tried to give an overview of so-called transformational areas of digital library technology. The aim has been to assess how much real transformation these applications can bring about, in terms of creating genuine user benefit and also changing everyday library practice. Design/methodology/approach - The paper provides a summary of some of the legal and ethical issues associated with web 2.0 applications in libraries, associated with a brief retrospective view of some relevant literature. Findings - Although web 2.0 innovations have had a massive impact on the larger World Wide Web, the practical impact on library service delivery has been limited to date. What probably can be termed transformational in the effect of web 2.0 developments on library and information work is their effect on some underlying principles of professional practice. Research limitations/implications - The legal and ethical challenges of incorporating web 2.0 platforms into mainstream institutional service delivery need to be subject to further research, so that the risks associated with these innovations are better understood at the strategic and policy-making level. Practical implications - This paper makes some recommendations about new principles of library and information practice which will help practitioners make better sense of these innovations in their overall information environment. Social implications - The paper puts in context some of the more problematic social impacts of web 2.0 innovations, without denying the undeniable positive contribution of social networking to the sphere of human interactivity. Originality/value - This paper raises some cautionary points about web 2.0 applications without adopting a precautionary approach of total prohibition. However, none of the suggestions or analysis in this piece should be considered to constitute legal advice. If such advice is required, the reader should consult appropriate legal professionals.
    Date
    22. 1.2011 17:54:04
  7. Villela Dantas, J.R.; Muniz Farias, P.F.: Conceptual navigation in knowledge management environments using NavCon (2010) 0.02
    0.019825058 = product of:
      0.09251694 = sum of:
        0.03856498 = weight(_text_:wide in 4230) [ClassicSimilarity], result of:
          0.03856498 = score(doc=4230,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 4230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
        0.041844364 = weight(_text_:web in 4230) [ClassicSimilarity], result of:
          0.041844364 = score(doc=4230,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43268442 = fieldWeight in 4230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
        0.012107591 = weight(_text_:information in 4230) [ClassicSimilarity], result of:
          0.012107591 = score(doc=4230,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 4230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
      0.21428572 = coord(3/14)
    
    Abstract
    This article presents conceptual navigation and NavCon, an architecture that implements this navigation in World Wide Web pages. The NavCon architecture makes use of ontology as metadata to contextualize user searches for information. Based on ontologies, NavCon automatically inserts conceptual links in Web pages. By using these links, the user may navigate in a graph representing ontology concepts and their relationships. By browsing this graph, it is possible to reach documents associated with the user's desired ontology concept. We call this Web navigation, supported by ontology concepts, conceptual navigation. Conceptual navigation is a technique to browse Web sites within a context. The context filters the retrieved information for relevance. The context also drives user navigation through paths that meet their needs. A company may implement conceptual navigation to improve user searches for information in a knowledge management environment. We suggest that the use of an ontology to conduct navigation in an Intranet may help users to better understand the knowledge structure of the company.
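The core move NavCon makes, automatically inserting conceptual links, can be mimicked in a few lines: scan page text for known ontology concept labels and wrap them in links. A rough sketch with invented concept URIs (the abstract does not describe the actual architecture's internals):

```python
import re

# Invented mapping from concept labels to concept pages.
ontology = {
    "knowledge management": "/concepts/knowledge-management",
    "ontology": "/concepts/ontology",
}

def insert_conceptual_links(text):
    """Wrap each known concept label in an <a> pointing at its concept page."""
    for label, uri in ontology.items():
        pattern = re.compile(re.escape(label), re.IGNORECASE)
        text = pattern.sub(
            lambda m, uri=uri: f'<a href="{uri}">{m.group(0)}</a>', text)
    return text

page = "This intranet page covers knowledge management practices."
linked = insert_conceptual_links(page)
print(linked)
```

A production version would parse the HTML properly and avoid relinking text inside existing tags; the sketch only shows the label-to-link substitution idea.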
    Source
    Information processing and management. 46(2010) no.4, S.413-425
  8. Klic, L.; Miller, M.; Nelson, J.K.; Germann, J.E.: Approaching the largest 'API' : extracting information from the Internet with Python (2018) 0.02
    0.019477464 = product of:
      0.09089483 = sum of:
        0.03856498 = weight(_text_:wide in 4239) [ClassicSimilarity], result of:
          0.03856498 = score(doc=4239,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 4239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
        0.041844364 = weight(_text_:web in 4239) [ClassicSimilarity], result of:
          0.041844364 = score(doc=4239,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43268442 = fieldWeight in 4239, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
        0.0104854815 = weight(_text_:information in 4239) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=4239,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 4239, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
      0.21428572 = coord(3/14)
    
    Abstract
    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four Python packages (Urllib, Selenium, BeautifulSoup, Scrapy) can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
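The abstract names Urllib, Selenium, BeautifulSoup and Scrapy; to keep this sketch dependency-free it uses only the standard library's html.parser (a real project would likely reach for BeautifulSoup or Scrapy), and it parses an inline HTML sample rather than fetching a live page:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, link text) pairs from <a> elements."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href is not None:
            self.links.append((self._href, data.strip()))
            self._href = None

sample = ('<p>See <a href="/metrics">web metrics</a> and '
          '<a href="/data">the web of data</a>.</p>')
parser = LinkExtractor()
parser.feed(sample)
print(parser.links)  # [('/metrics', 'web metrics'), ('/data', 'the web of data')]
```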
  9. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.02
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The more well structured the Web content is, the easier it is for search engines to understand it and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities - such as tags, blog posts, people, or reviews - within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from multiple sites. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
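The Microdata technique the chapter covers marks entities up with `itemscope`/`itemprop` attributes inside ordinary HTML. A minimal sketch of the extraction step a search engine performs, using only Python's stdlib parser on a made-up schema.org Review snippet (nested items and `link`/`meta` properties are deliberately ignored here):

```python
from html.parser import HTMLParser

class MicrodataParser(HTMLParser):
    """Collect (itemprop, text) pairs from flat HTML5 microdata markup."""
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None  # itemprop name whose text we are waiting for

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

# invented snippet in the style of schema.org/Review rich-snippet markup
page = """<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="itemReviewed">Acme Phone</span>
  <span itemprop="ratingValue">4</span>
  <span itemprop="author">J. Smith</span>
</div>"""

p = MicrodataParser()
p.feed(page)
print(p.props)
```

RDFa extraction follows the same pattern but reads `property`/`typeof` attributes and resolves CURIE prefixes, which is why generic RDF toolkits are normally used for it rather than a hand-rolled parser.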
  10. Webwissenschaft : eine Einführung (2010) 0.02
    Abstract
    The World Wide Web differs substantially in structure from traditional media and has changed the media system from the ground up. The effects of this web-technological innovation are radical, both for the media landscape and society and for the disciplines concerned with media - their history, content, forms, technology, effects, and so on. Against this background, this introduction discusses, on the one hand, research questions of a future web science at an overarching level and, on the other, integrates the perspectives of the relevant reference disciplines.
    Content
    Inhalt: Ist das Web ein Medium? - Konrad Scherfer Warum und zu welchem Zweck benötigen wir eine Webwissenschaft? - Helmut Volpers 'Diese Site wird nicht mehr gewartet'. Medienanalytische Perspektiven in den Medienwechseln - Rainer Leschke Emergente Öffentlichkeit? Bausteine zu einer Theorie der Weböffentlichkeit - Christoph Ernst Das ICH im Web - Auswirkungen virtueller Identitäten auf soziale Beziehungen - Helmut Volpers / Karin Wunder Technikgeschichte des Webs - Tom Alby Visuelles Denken im Interaktions- und Webdesign - Cyrus Khazaeli Das fotografische Bild im Web - Anja Bohnhof / Kolja Kracht Qualität im Web - Interdisziplinäre Website-Bewertung - David Kratz Für eine neue Poesie der Neugier. Das Web verändert den Journalismus - nicht nur online - Mercedes Bunz Das Web braucht Spezialisten, keine Generalisten. Zur Notwendigkeit einer webspezifischen Professionalisierung in der Ausbildung - Petra Werner Online-Forschung im Web - Methodenschwerpunkte im Überblick - Simone Fühles-Ubach Im Spiel der Moden? - Das Web in der Wirtschaft, die Wirtschaft im Web - Jörg Hoewner Medizin im Web - Martina Waitz Das Web und das Medienrecht - Bernd Holznagel / Thorsten Ricke Suchmaschinenforschung im Kontext einer zukünftigen Webwissenschaft - Dirk Lewandowski
    RSWK
    World Wide Web / Medienwissenschaft / Kommunikationswissenschaft
    Web Site / Gestaltung
    Subject
    World Wide Web / Medienwissenschaft / Kommunikationswissenschaft
    Web Site / Gestaltung
  11. Brügger, N.: ¬The archived Web : doing history in the digital age (2018) 0.02
    Abstract
    An original methodological framework for approaching the archived web, both as a source and as an object of study in its own right. As life continues to move online, the web becomes increasingly important as a source for understanding the past. But historians have yet to formulate a methodology for approaching the archived web as a source of study. How should the history of the present be written? In this book, Niels Brügger offers an original methodological framework for approaching the web of the past, both as a source and as an object of study in its own right. While many studies of the web focus solely on its use and users, Brügger approaches the archived web as a semiotic, textual system in order to offer the first book-length treatment of its scholarly use. While the various forms of the archived web can challenge researchers' interactions with it, they also present a range of possibilities for interpretation. The Archived Web identifies characteristics of the online web that are significant now for scholars, investigates how the online web became the archived web, and explores how the particular digitality of the archived web can affect a historian's research process. Brügger offers suggestions for how to translate traditional historiographic methods for the study of the archived web, focusing on provenance, creating an overview of the archived material, evaluating versions, and citing the material. The Archived Web lays the foundations for doing web history in the digital age, offering important and timely guidance for today's media scholars and tomorrow's historians.
    Content
    "How will the history of the present be written? As life continues to move online, the web becomes ever more important for an understanding of the past. This book offers an original theoretical framework for approaching the web of the past, both as a source and as an object of study in its own right"
    LCSH
    Web archives / Social aspects
    World Wide Web / History
    RSWK
    World Wide Web / Archivierung / Digitalisierung / Geschichtswissenschaft / Geschichtsschreibung
    Subject
    World Wide Web / Archivierung / Digitalisierung / Geschichtswissenschaft / Geschichtsschreibung
    Web archives / Social aspects
    World Wide Web / History
  12. Facets of Facebook : use and users (2016) 0.02
    Abstract
    The debate on Facebook raises questions about the use and users of this information service. This collected volume gathers a broad spectrum of social science and information science articles about Facebook. Facebook has many facets; this volume looks above all at its use and users. The user facet has sub-facets such as age, sex, and culture. The use facet comprises sub-facets such as privacy behavior after the Snowden affair, dealing with friends, unfriending and being unfriended on Facebook, and possible Facebook addiction. We also consider Facebook as a source for local contemporary history and address acceptance and quality perceptions of this social network service. This book brings together contributions on all these research facets of Facebook. It is a much-needed compilation written by leading scholars in the fields investigating the impact of Web 2.0. The target groups are social media researchers, information scientists, and social scientists, as well as all those interested in Facebook-related topics.
    BK
    05.38 (Neue elektronische Medien) <Kommunikationswissenschaft>
    Classification
    05.38 (Neue elektronische Medien) <Kommunikationswissenschaft>
    Series
    Knowledge and information
  13. Bhattacharya, S.; Yang, C.; Srinivasan, P.; Boynton, B.: Perceptions of presidential candidates' personalities in twitter (2016) 0.02
    Abstract
    Political sentiment analysis using social media, especially Twitter, has attracted wide interest in recent years. In such research, opinions about politicians are typically divided into positive, negative, or neutral. In our research, the goal is to mine political opinion from social media at a higher resolution by assessing statements of opinion related to the personality traits of politicians; this is an angle that has not yet been considered in social media research. A second goal is to contribute a novel retrieval-based approach for tracking public perception of personality using Gough and Heilbrun's Adjective Check List (ACL) of 110 terms describing key traits. This is in contrast to the typical lexical and machine-learning approaches used in sentiment analysis. High-precision search templates developed from the ACL were run on an 18-month span of Twitter posts mentioning Obama and Romney and these retrieved more than half a million tweets. For example, the results indicated that Romney was perceived as more of an achiever and Obama was perceived as somewhat more friendly. The traits were also aggregated into 14 broad personality dimensions. For example, Obama rated far higher than Romney on the Moderation dimension and lower on the Machiavellianism dimension. The temporal variability of such perceptions was explored.
    Date
    22. 1.2016 11:25:47
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.249-267
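The retrieval-based approach described above can be approximated with high-precision search templates built from adjective terms. A toy sketch with a three-term stand-in for the 110-term ACL; the patterns, terms, and dimension mappings are invented for illustration and are not the authors' actual templates:

```python
import re

# toy stand-in for Gough and Heilbrun's 110-term Adjective Check List,
# mapping each trait adjective to a broader personality dimension
ACL_TERMS = {
    "friendly": "Friendliness",
    "moderate": "Moderation",
    "cunning": "Machiavellianism",
}

def template(candidate, trait):
    # high-precision pattern in the spirit of "<candidate> is (very|so|...) <trait>"
    return re.compile(
        rf"\b{candidate}\s+is\s+(?:really\s+|very\s+|so\s+|quite\s+)?{trait}\b",
        re.IGNORECASE,
    )

def count_perceptions(tweets, candidate):
    """Aggregate template hits per personality dimension for one candidate."""
    counts = {}
    for trait, dimension in ACL_TERMS.items():
        pat = template(candidate, trait)
        hits = sum(1 for t in tweets if pat.search(t))
        if hits:
            counts[dimension] = counts.get(dimension, 0) + hits
    return counts

tweets = [
    "Obama is so friendly in person",
    "obama is very moderate on this",
    "Romney is cunning",
    "I like Obama",
]
print(count_perceptions(tweets, "Obama"))
```

The precision/recall trade-off is the design point: such templates miss many opinion statements, but the matches they do return are almost always genuine trait attributions, which is what the tracking application needs.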
  14. Griesbaum, J.: Social Web : Überblick Einordnung informationswissenschaftliche Perspektiven (2010) 0.02
    Abstract
    This article discusses information science perspectives on the Social Web. It first approaches the concept via the technological and social development trends of the Internet and illustrates the resulting phenomena with an exemplary survey of important services and applications. Building on this, the Social Web is framed, from a societal perspective, as a global architecture of participation that in the long run carries the potential for structural upheaval in a wide range of areas and fields of action. From an information science perspective, the effects on individual and collective processes of information, knowledge, and communication are the aspects most relevant to the discipline. The Social Web thus enriches core topics such as information retrieval, human-computer interaction, and knowledge management with novel facets, while at the same time new fields of research emerge. The article sketches some of these aspects, which are currently leading in Hildesheim - particularly with the newly created junior professorship "Social Networks and Collaborative Media" - to an expansion of the information science teaching and research portfolio. The aim of the article is to show that the current development trends of the Internet underline the importance of information science as a future-oriented teaching and research discipline, and at the same time create opportunities and demand for an assertive profiling of the discipline.
    Object
    Web 2.0
    Source
    Information - Wissenschaft und Praxis. 61(2010) H.6/7, S.349-360
  15. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
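The bottom-up step of the method aggregates the situational codings that several end users assign to the same web document. A minimal sketch of one plausible aggregation rule (majority vote with a fallback label; the threshold and label names are assumptions, not the authors' exact procedure):

```python
from collections import Counter

def majority_register(codings, min_agreement=0.5):
    """Assign a register when a majority of user coders agree; else flag as unclear."""
    label, hits = Counter(codings).most_common(1)[0]
    return label if hits / len(codings) > min_agreement else "hybrid/unclear"

# four end users coded the same web document via the decision-tree survey
clear = majority_register(["narrative", "narrative", "narrative", "how-to"])
split = majority_register(["narrative", "opinion", "how-to", "informational"])
print(clear, split)
```

Documents that land in the fallback category are themselves informative: the paper's pilot studies use coder disagreement to refine the decision tree and to identify hybrid registers on the web.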
  16. Fu, T.; Abbasi, A.; Chen, H.: ¬A focused crawler for Dark Web forums (2010) 0.02
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1213-1231
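The incremental crawler with recall improvement can be sketched as a queue that skips already-collected pages and re-queues temporarily inaccessible ones for a later pass. This is a simplification of the system described, assuming a pluggable `fetch` function as a stand-in for the human-assisted HTTP access:

```python
from collections import deque

def incremental_crawl(seed_urls, fetch, known=None):
    """One incremental pass: fetch new URLs, record failures for a retry pass."""
    known = dict(known or {})          # url -> content collected in earlier passes
    queue = deque(seed_urls)
    failed = []
    while queue:
        url = queue.popleft()
        if url in known:
            continue                   # incremental update: skip collected pages
        content = fetch(url)
        if content is None:
            failed.append(url)         # recall improvement: retry these later
        else:
            known[url] = content
    return known, failed

# toy fetcher standing in for HTTP access to forum pages; "b" is inaccessible
pages = {"a": "post1", "b": None, "c": "post3"}
known, failed = incremental_crawl(["a", "b", "c"], lambda u: pages.get(u))
print(known, failed)
```

A later pass would be started with `incremental_crawl(failed, fetch, known)`, which is the recall-improvement loop in miniature: each pass only spends effort on pages that are new or previously failed.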
  17. Berners-Lee, T.: ¬The Father of the Web will give the Internet back to the people (2018) 0.02
    Content
    "This week, Berners-Lee will launch Inrupt ( https://www.password-online.de/?email_id=571&user_id=1045&urlpassed=aHR0cHM6Ly93d3cuaW5ydXB0LmNvbQ&controller=stats&action=analyse&wysija-page=1&wysijap=subscriptions ), a startup that he has been building, in stealth mode, for the past nine months. For years now, Berners-Lee and other internet activists have been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over. "We have to do it now," he says, displaying an intensity and urgency that is uncharacteristic for this soft-spoken academic. "It's a historical moment." If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. . On his screen, there is a simple-looking web page with tabs across the top: Tim's to-do list, his calendar, chats, address book. He built this app-one of the first on Solid for his personal use. It is simple, spare. In fact, it's so plain that, at first glance, it's hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid's decentralized technology, allows Berners-Lee to access all of his data seamlessly-his calendar, his music library, videos, chat, research. It's like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp. The difference here is that, on Solid, all the information is under his control. 
In: Exclusive: Tim Berners-Lee tells us his radical new plan to upend the World Wide Web ( https://www.password-online.de/?email_id=571&user_id=1045&urlpassed=aHR0cHM6Ly93d3cuZmFzdGNvbXBhbnkuY29tLzkwMjQzOTM2L2V4Y2x1c2l2ZS10aW0tYmVybmVycy1sZWUtdGVsbHMtdXMtaGlzLXJhZGljYWwtbmV3LXBsYW4tdG8tdXBlbmQtdGhlLXdvcmxkLXdpZGUtd2Vi&controller=stats&action=analyse&wysija-page=1&wysijap=subscriptions ), in: https://www.fastcompany.com/90243936/exclusive-tim-berners-lee-tells-us-his-radical-new-plan-to-upend-the-world-wide-web ( https://www.password-online.de/?email_id=571&user_id=1045&urlpassed=aHR0cHM6Ly93d3cuZmFzdGNvbXBhbnkuY29tLzkwMjQzOTM2L2V4Y2x1c2l2ZS10aW0tYmVybmVycy1sZWUtdGVsbHMtdXMtaGlzLXJhZGljYWwtbmV3LXBsYW4tdG8tdXBlbmQtdGhlLXdvcmxkLXdpZGUtd2Vi&controller=stats&action=analyse&wysija-page=1&wysijap=subscriptions)."
  18. Rogers, R.: Digital methods (2013) 0.01
    Abstract
    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).
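Among the "methods of the medium" the abstract names is PageRank. As a minimal illustration of what repurposing such a link-analysis method looks like in practice, the following sketch computes PageRank over a tiny hypothetical hyperlink graph (the graph data and function names are illustrative, not from the book):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to.
    Returns a dict of PageRank scores that sum to 1."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets the teleport share; linked pages get link shares.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for t in targets:
                new_rank[t] += share
        rank = new_rank
    return rank

# Tiny example graph: a links to b and c, b links to c, c links back to a.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

In this graph, "c" ends up with the highest score because it receives links from both "a" and "b", which is the "politics of association" that link analysis makes visible.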
    Content
    The end of the virtual : digital methods -- The link and the politics of Web space -- The website as archived object -- Googlization and the inculpable engine -- Search as research -- National Web studies -- Social media and post-demographics -- Wikipedia as cultural reference -- After cyberspace : big data, small data.
    LCSH
    Web search engines
    World Wide Web / Research
    RSWK
    Internet / Recherche / World Wide Web 2.0
    Subject
    Internet / Recherche / World Wide Web 2.0
    Web search engines
    World Wide Web / Research
  19. Luo, Z.; Yu, Y.; Osborne, M.; Wang, T.: Structuring tweets for improving Twitter search (2015) 0.01
    Abstract
    Spam and wildly varying documents make searching in Twitter challenging. Most Twitter search systems generally treat a Tweet as plain text when modeling relevance. However, a series of conventions allows users to Tweet in structural ways using a combination of different blocks of texts. These blocks include plain texts, hashtags, links, mentions, etc. Each block encodes a variety of communicative intent, and the sequence of these blocks captures changing discourse. Previous work shows that exploiting structural information can improve retrieval of structured documents (e.g., web pages). In this study we utilize the structure of Tweets, induced by these blocks, for Twitter retrieval and Twitter opinion retrieval. For Twitter retrieval, a set of features, derived from the blocks of text and their combinations, is used in a learning-to-rank scenario. We show that structuring Tweets can achieve state-of-the-art performance. Our approach does not rely on social media features, but when we do add this additional information, performance improves significantly. For Twitter opinion retrieval, we explore the question of whether structural information derived from the body of Tweets and opinionatedness ratings of Tweets can improve performance. Experimental results show that retrieval using a novel unsupervised opinionatedness feature based on structuring Tweets achieves comparable performance with a supervised method using manually tagged Tweets. Topic-related specific structured Tweet sets are shown to help with query-dependent opinion retrieval.
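The block segmentation the abstract describes can be sketched as follows. The segmentation regex and feature names here are illustrative assumptions, not the authors' implementation; the point is how a tweet decomposes into hashtag, mention, link, and plain-text blocks from which ranking features can be derived:

```python
import re

# Capturing group keeps the matched hashtags/mentions/links in the split output.
TOKEN = re.compile(r"(#\w+|@\w+|https?://\S+)")

def tweet_blocks(text):
    """Split a tweet into (kind, content) blocks, preserving order."""
    blocks = []
    for part in TOKEN.split(text):
        part = part.strip()
        if not part:
            continue
        if part.startswith("#"):
            blocks.append(("hashtag", part))
        elif part.startswith("@"):
            blocks.append(("mention", part))
        elif part.startswith("http"):
            blocks.append(("link", part))
        else:
            blocks.append(("text", part))
    return blocks

def block_features(text):
    """Simple block-derived features, e.g. as input to a learning-to-rank model."""
    kinds = [k for k, _ in tweet_blocks(text)]
    return {
        "n_blocks": len(kinds),
        "n_hashtags": kinds.count("hashtag"),
        "n_mentions": kinds.count("mention"),
        "n_links": kinds.count("link"),
        "starts_with_mention": kinds[:1] == ["mention"],
    }
```

For example, `block_features("@bob check this #ir paper https://example.org")` yields five blocks, with one hashtag, one mention, and one link.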
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2522-2539
  20. Barrio, P.; Gravano, L.: Sampling strategies for information extraction over the deep web (2017) 0.01
    Abstract
    Information extraction systems discover structured information in natural language text. Having information in structured form enables much richer querying and data mining than is possible over the natural language text. However, information extraction is a computationally expensive task, and hence improving the efficiency of the extraction process over large text collections is of critical interest. In this paper, we focus on an especially valuable family of text collections, namely, the so-called deep-web text collections, whose contents are not crawlable and are only available via querying. Important steps for efficient information extraction over deep-web text collections (e.g., selecting the collections on which to focus the extraction effort, based on their contents; or learning which documents within these collections-and in which order-to process, based on their words and phrases) require having a representative document sample from each collection. These document samples have to be collected by querying the deep-web text collections, an expensive process that renders impractical the existing sampling approaches developed for other data scenarios. In this paper, we systematically study the space of query-based document sampling techniques for information extraction over the deep web. Specifically, we consider (i) alternative query execution schedules, which vary on how they account for query effectiveness, and (ii) alternative document retrieval and processing schedules, which vary on how they distribute the extraction effort over documents. We report the results of the first large-scale experimental evaluation of sampling techniques for information extraction over the deep web. Our results show the merits and limitations of the alternative query execution and document retrieval and processing strategies, and provide a roadmap for addressing this critically important building block for efficient, scalable information extraction.
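Query-based document sampling over a non-crawlable collection, as described in the abstract, can be sketched minimally as below. The `search` interface and the random query schedule are hypothetical stand-ins; the paper compares far more refined schedules that weight query effectiveness:

```python
import random

def sample_documents(search, query_pool, target_size, seed=0):
    """Collect a document sample from a deep-web collection reachable only
    via queries. search(query) -> list of document ids."""
    rng = random.Random(seed)
    queries = list(query_pool)
    rng.shuffle(queries)  # simplest schedule: random query order
    sample = set()
    for query in queries:
        for doc_id in search(query):
            sample.add(doc_id)
            if len(sample) >= target_size:
                return sample
    # Query pool exhausted before reaching the target size.
    return sample
```

A usage example against a toy in-memory "collection": with an index `{"a": [1, 2], "b": [2, 3], "c": [4]}` and `target_size=3`, the sampler stops as soon as three distinct documents have been retrieved.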
    Source
    Information processing and management. 53(2017) no.2, S.309-331

Languages

  • e 159
  • d 61

Types

  • a 191
  • m 25
  • el 13
  • s 3
  • x 1

Subjects

Classifications