This database contains more than 40,000 documents on topics from the fields of descriptive cataloguing, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of 3 March 2020)
1. Alphabet enttäuscht Anleger : ein Kommentar von Michael Konrad.
In: ¬Die Rheinpfalz am Sonntag. Nr. 30 vom 05.02.2020, S. Wirtschaft.
Abstract: For the first time, Google's parent company Alphabet gives real insight into the business of its video platform YouTube, lifting a closely guarded secret. On the stock market, however, the quarterly report was poorly received.
Objekt: Google ; Alphabet ; Youtube
2. Lewandowski, D. ; Sünkler, S.: What does Google recommend when you want to compare insurance offerings?
In: Aslib journal of information management. 71(2019) no.3, S.310-324.
Abstract: Purpose: The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons. Design/methodology/approach: The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform. Findings: Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists. Research limitations/implications: The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries. Originality/value: The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
Inhalt: Vgl.: https://doi.org/10.1108/AJIM-07-2018-0172.
Anmerkung: Beitrag in einem Special Issue: Information Science in the German-speaking Countries
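The provider-level aggregation described in the abstract above can be sketched in a few lines of Python. The sample URLs and the domain-to-provider mapping below are hypothetical; the authors' actual system (automated querying plus KNIME analysis) is far more involved.

```python
from collections import Counter
from urllib.parse import urlparse

def domain_of(url):
    """Reduce a result URL to its host, naively stripping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def provider_counts(result_urls, domain_to_provider):
    """Aggregate result URLs on the provider level rather than the domain level."""
    counts = Counter()
    for url in result_urls:
        domain = domain_of(url)
        # Domains without a known provider count as their own provider.
        counts[domain_to_provider.get(domain, domain)] += 1
    return counts

# Hypothetical example: two domains operated by the same provider.
urls = ["https://www.check24.de/a",
        "https://versicherung-check24.de/b",
        "https://www.verivox.de/c"]
providers = {"check24.de": "Check24",
             "versicherung-check24.de": "Check24",
             "verivox.de": "Verivox"}
print(provider_counts(urls, providers))  # Counter({'Check24': 2, 'Verivox': 1})
```

Mapping several domains onto one provider is exactly what makes the paper's point visible: a results list that looks diverse at the domain level may be served by very few providers.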
3. Luo, M.M. ; Nahl, D.: Let's Google : uncertainty and bilingual search.
In: Journal of the Association for Information Science and Technology. 70(2019) no.9, S.1014-1025.
Abstract: This study applies Kuhlthau's Information Search Process (ISP) model to understand bilingual users' Internet search experience. We conducted a quasi-field experiment with 30 bilingual searchers, and the results suggested that the ISP model was applicable to studying searchers' information retrieval behavior in simple search tasks. However, searchers' emotional responses differed from those of the ISP model for a complex task. By testing searchers using different search strategies, the results suggested that search engines with multilanguage search functions provide an advantage for bilingual searchers in the Internet's multilingual environment. The findings showed that when searchers used a search engine as a tool for problem solving, they might experience different feelings in each ISP stage than when searching for information for a term paper using a library. The results echo other research findings that indicate that information seeking is a multifaceted phenomenon.
Inhalt: Vgl.: https://onlinelibrary.wiley.com/doi/10.1002/asi.24174.
Themenfeld: Suchmaschinen ; Multilinguale Probleme
4. Suranofsky, M. ; McColl, L.: ¬A Google Sheets add-on that uses the WorldCat Search API : MatchMarc.
In: Code4Lib journal. Issue 46(2019), [http://journal.code4lib.org].
Abstract: Lehigh University Libraries has developed a new tool for querying WorldCat using the WorldCat Search API. The tool is a Google Sheets add-on and is available now via the Google Sheets Add-ons menu under the name "MatchMarc." The add-on is easily customizable, with no knowledge of coding needed. The tool will return a single "best" OCLC record number and its bibliographic information for a given ISBN or LCCN, allowing the user to set up and define "best." Because all of the information, the input, the criteria, and the results exist in the Google Sheets environment, efficient workflows can be developed from this flexible starting point. This article will discuss the development of the add-on, how it works, and future plans for development.
Inhalt: Vgl.: https://journal.code4lib.org/articles/14813.
Objekt: Google ; MARC ; WorldCat ; MatchMarc
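A request of the kind MatchMarc issues against the WorldCat Search API might look like the sketch below. The SRU endpoint, the `srw.bn` ISBN index, and the parameter names are assumptions based on OCLC's classic Search API, not taken from the add-on's code.

```python
from urllib.parse import urlencode

# Assumed endpoint of OCLC's classic WorldCat Search API (SRU flavour).
WORLDCAT_SRU = "http://www.worldcat.org/webservices/catalog/search/worldcat/sru"

def build_isbn_query(isbn, wskey, max_records=5):
    """Build an SRU request URL asking WorldCat for MARCXML records for an ISBN."""
    params = {
        "query": f'srw.bn all "{isbn}"',   # srw.bn = ISBN index (assumed)
        "maximumRecords": max_records,
        "recordSchema": "info:srw/schema/1/marcxml",
        "wskey": wskey,                    # API key issued by OCLC
    }
    return WORLDCAT_SRU + "?" + urlencode(params)

url = build_isbn_query("9780262033848", "DEMO_KEY")
print(url)
```

Picking the single "best" OCLC number from the returned MARCXML is then a matter of ranking the candidate records by whatever criteria the user has configured in the sheet.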
5. Farney, T.: Using Google Tag Manager to share code : designing shareable tags.
In: Code4Lib journal. Issue 46(2019), [http://journal.code4lib.org].
Inhalt: Vgl.: https://journal.code4lib.org/articles/14853.
Objekt: Google Tag Manager
6. Hodges, D.W. ; Schlottmann, K.: Better archival migration outcomes with Python and the Google Sheets API : reporting from the archives.
In: Code4Lib journal. Issue 46(2019), [http://journal.code4lib.org].
Abstract: Columbia University Libraries recently embarked on a multi-phase project to migrate nearly 4,000 records describing over 70,000 linear feet of archival material from disparate sources and formats into ArchivesSpace. This paper discusses tools and methods brought to bear in Phase 2 of this project, which required us to look closely at how to integrate a large number of legacy finding aids into the new system and merge descriptive data that had diverged in myriad ways. Using Python, XSLT, and a widely available if underappreciated resource, the Google Sheets API, archival and technical library staff devised ways to efficiently report data from different sources and present it in an accessible, user-friendly way. Responses were then fed back into automated data remediation processes to keep the migration project on track and minimize manual intervention. The scripts and processes developed proved very effective and, moreover, show promise well beyond the ArchivesSpace migration. This paper describes the Python/XSLT/Sheets API processes developed and how they opened a path to move beyond CSV-based reporting with flexible, ad-hoc data interfaces easily adaptable to meet a variety of purposes.
Inhalt: Vgl.: https://journal.code4lib.org/articles/14871.
Objekt: Google Sheets API ; Python
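The reporting pattern the abstract describes (shape records into rows, push them into a shared Google Sheet from Python) might be sketched as follows. `push_report` follows the Sheets v4 interface of `google-api-python-client`; the spreadsheet ID, range name, and field names are hypothetical, and the Libraries' actual scripts are not reproduced here.

```python
def to_rows(records, fields):
    """Flatten per-record dicts into a header row plus one row per record."""
    rows = [list(fields)]
    for rec in records:
        rows.append([str(rec.get(f, "")) for f in fields])
    return rows

def push_report(service, spreadsheet_id, rows):
    """Overwrite the target range with the report rows (Sheets API v4).

    `service` is assumed to come from
    googleapiclient.discovery.build("sheets", "v4", credentials=...).
    """
    body = {"values": rows}
    return service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_id,
        range="Report!A1",          # hypothetical sheet/range name
        valueInputOption="RAW",
        body=body,
    ).execute()

# Hypothetical migration-report records.
records = [{"bibid": "4078", "ead_status": "diverged"},
           {"bibid": "5121", "ead_status": "ok"}]
print(to_rows(records, ["bibid", "ead_status"]))
```

Keeping the row-shaping separate from the API call makes the report logic testable without credentials, which suits the feedback loop the paper describes (staff respond in the sheet, scripts read the responses back).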
7. Lewandowski, D. ; Kerkmann, F. ; Rümmele, S. ; Sünkler, S.: ¬An empirical investigation on search engine ad disclosure.
In: Journal of the Association for Information Science and Technology. 69(2018) no.3, S.420-437.
Abstract: This representative study of German search engine users (N = 1,000) focuses on the ability of users to distinguish between organic results and advertisements on Google results pages. We combine questions about Google's business with task-based studies in which users were asked to distinguish between ads and organic results in screenshots of results pages. We find that only a small percentage of users can reliably distinguish between ads and organic results, and that user knowledge of Google's business model is very limited. We conclude that ads are insufficiently labelled as such, and that many users may click on ads assuming that they are selecting organic results.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23963/full.
Themenfeld: Suchmaschinen ; Benutzerstudien
8. Abad-García, M.-F. ; González-Teruel, A. ; González-Llinares, J.: Effectiveness of OpenAIRE, BASE, Recolecta, and Google Scholar at finding Spanish articles in repositories.
In: Journal of the Association for Information Science and Technology. 69(2018) no.4, S.619-622.
Abstract: This paper explores the usefulness of OpenAIRE, BASE, Recolecta, and Google Scholar (GS) for evaluating open access (OA) policies that demand a deposit in a repository. A case study was designed focusing on 762 articles financed by a FIS-2012 project of the Instituto de Salud Carlos III, the main management body for health research of the Spanish national health service. Their funding is therefore subject to the Spanish government's OA mandate. A search was carried out for full-text OA copies of the 762 articles using the four tools being evaluated, with identification of the repository housing these items. Of the 762 articles concerned, 510 OA copies were found of 353 unique articles (46.3%) in 68 repositories. OA copies were found of 81.9% of the articles in PubMed Central and copies of 49.5% of the articles in an institutional repository (IR). BASE and GS identified 93.5% of the articles and OpenAIRE 86.7%. Recolecta identified just 62.2% of the articles deposited in a Spanish IR. BASE achieved the greatest success by locating copies deposited in IRs, while GS found those deposited in disciplinary repositories. None of the tools identified copies of all the articles, so they need to be used in a complementary way when evaluating OA policies.
Inhalt: Vgl.: https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.23975.
Themenfeld: Informetrie ; Elektronisches Publizieren
Objekt: OpenAIRE ; BASE ; Recolecta ; Google Scholar
9. Hurz, S.: Google verfolgt Nutzer, auch wenn sie explizit widersprechen. [14. August 2018].
Abstract: Even when Google users switch off Location History, the company still stores movement data. This affects more than two billion people who use Android smartphones or iPhones with Google services. Anyone who wants to prevent the tracking must disable "Web & App Activity" entirely.
10. Gantman, E.R. ; Dabós, M.P.: Research output and impact of the fields of management, economics, and sociology in Spain and France : an analysis using Google Scholar and Scopus.
In: Journal of the Association for Information Science and Technology. 69(2018) no.8, S.1054-1066.
Abstract: Because its coverage of documentary sources in many languages is greater than that of traditional bibliographic databases, Google Scholar is an ideal tool for examining the social sciences in non-Anglophone countries. We have therefore used it to study the scholarly output and impact of three scientific disciplines, management, economics, and sociology, in Spain and France, comparing some of the results with those retrieved with Scopus. Our findings show that scientific articles are the predominant form of scholarly communication in Google Scholar for our selected fields and countries. In addition, our results indicate that in Google Scholar the vernacular languages of each country are used more than English in all cases except economics in France. The opposite occurs in Scopus, except for sociology articles in France. We also show that books receive on average more citations than other published documents in Google Scholar. Finally, we demonstrate that publishing in English is associated with greater scholarly impact, except for the case of France in Google Scholar for articles in sociology and books in the three fields.
Inhalt: Vgl.: https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.24020.
Wissenschaftsfach: Wirtschaftswissenschaften ; Soziologie
Objekt: Google Scholar ; Scopus
Land/Ort: F ; ES
11. Abdelkareem, M.A.A.: In terms of publication index, what indicator is the best for researchers indexing, Google Scholar, Scopus, Clarivate or others?
Abstract: I believe that Google Scholar is the most popular academic indexing service for researchers and citations. However, some other indexing institutions may be more professional than Google Scholar, though not as popular. Other indexing websites such as Scopus and Clarivate provide more statistical figures for scholars, institutions, or even journals. As for publication citations, Google Scholar always shows higher citation counts for a paper than other indexing websites, since it considers most publication platforms and can therefore easily count citations, while other databases only count citations coming from journals already indexed in their own database.
Themenfeld: Retrievalalgorithmen ; Informetrie
Objekt: Google Scholar ; Scopus ; Clarivate
12. Bilal, D. ; Gwizdka, J.: Children's query types and reformulations in Google search.
In: Information processing and management. 54(2018) no.6, S.1022-1041.
Abstract: We investigated the searching behaviors of twenty-four children in grades 6, 7, and 8 (ages 11-13) in finding information on three types of search tasks in Google. Children conducted 72 search sessions and issued 150 queries. Children's phrase- and question-like queries combined were much more prevalent than keyword queries (70% vs. 30%, respectively). Fifty-two percent of the queries were reformulations (33 sessions). We classified children's query reformulation types into five classes based on the taxonomy by Liu et al. (2010). We found that most query reformulations were by Substitution and Specialization, and that children hardly repeated queries. We categorized children's queries by task facets and examined the way they expressed these facets in their query formulations and reformulations. The oldest children tended to target the general topic of search tasks in their queries most frequently, whereas younger children expressed one of the two facets more often. We assessed children's achieved task outcomes using the search task outcomes measure we developed. Children were mostly more successful on the fact-finding and fully self-generated task and partially successful on the research-oriented task. Query type, reformulation type, achieved task outcomes, and expressing task facets varied by task type and grade level. There was no significant effect of query length in words or of the number of queries issued on search task outcomes. The study findings have implications for human intervention, digital literacy, search task literacy, as well as for system intervention to support children's query formulation and reformulation during interaction with Google.
Inhalt: Vgl.: https://doi.org/10.1016/j.ipm.2018.06.008.
Themenfeld: Suchtaktik ; Suchmaschinen
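A set-based reading of a five-class reformulation scheme can be sketched as follows. Only Substitution, Specialization, and repeated queries are named in the abstract; the remaining labels and the token-overlap heuristic are illustrative assumptions, not the coding scheme of Liu et al. (2010).

```python
def classify_reformulation(prev_query, curr_query):
    """Crudely assign a reformulation class by comparing query term sets."""
    a = set(prev_query.lower().split())
    b = set(curr_query.lower().split())
    if a == b:
        return "Repeat"          # identical term sets
    if a < b:
        return "Specialization"  # terms were added
    if b < a:
        return "Generalization"  # terms were dropped
    if a & b:
        return "Substitution"    # some terms were swapped
    return "New"                 # no overlap with the previous query

print(classify_reformulation("killer whales", "killer whales habitat"))  # Specialization
print(classify_reformulation("killer whales", "orca diet"))              # New
```

Real coding of children's reformulations requires human judgment (stemming, synonyms, task context), so a heuristic like this is at best a first-pass filter.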
13. Beck, C.: Primo gegen Google Scholar : benutzerfreundliches Discovery 10 Jahre später.
In: ABI-Technik. 38(2018) H.4, S.336-343.
Abstract: For ten years, academic libraries have faced the question of whether they should use discovery systems or Internet search engines such as Google Scholar to provide access to their holdings. A comparison of the discovery system Primo from the vendor Ex Libris with Google Scholar shows that Primo offers better usability: all in all, it is easier to use and returns more relevant and more diverse results.
Inhalt: Vgl.: https://doi.org/10.1515/abitech-2018-4007.
Objekt: Primo ; Google Scholar
14. Sünkler, S. ; Kerkmann, F. ; Schultheiß, S.: Ok Google ... the end of search as we know it : sprachgesteuerte Websuche im Test.
In: B.I.T.online. 21(2018) H.1, S.25-32.
Abstract: Voice control systems that assist users on spoken command are becoming increasingly popular with the spread of smartphones and speaker systems such as Amazon Echo or Google Home. One of their central applications is searching in web search engines. But how does "googling" work when the user speaks the query instead of typing it? A project team at HAW Hamburg pursued this question and, on behalf of Deutsche Telekom, examined how effectively, efficiently, and satisfactorily Google Now, Apple Siri, Microsoft Cortana, and Amazon Fire OS perform. The study identified strengths and weaknesses of the systems as well as success criteria for high usability. These findings resulted in a prototype of an optimal voice web search.
Inhalt: Vgl.: https://www.b-i-t-online.de/heft/2018-01-index.php.
Themenfeld: Suchmaschinen ; Computerlinguistik
15. Kousha, K. ; Thelwall, M.: Patent citation analysis with Google.
In: Journal of the Association for Information Science and Technology. 68(2017) no.1, S.48-61.
Abstract: Citations from patents to scientific publications provide useful evidence about the commercial impact of academic research, but automatically searchable databases are needed to exploit this connection for large-scale patent citation evaluations. Google covers multiple different international patent office databases but does not index patent citations or allow automatic searches. In response, this article introduces a semiautomatic indirect method via Bing to extract and filter patent citations from Google to academic papers with an overall precision of 98%. The method was evaluated with 322,192 science and engineering Scopus articles from every second year for the period 1996-2012. Although manual Google Patent searches give more results, especially for articles with many patent citations, the difference is not large enough to be a major problem. Within Biomedical Engineering, Biotechnology, and Pharmacology & Pharmaceutics, 7% to 10% of Scopus articles had at least one patent citation but other fields had far fewer, so patent citation analysis is only relevant for a minority of publications. Low but positive correlations between Google Patent citations and Scopus citations across all fields suggest that traditional citation counts cannot substitute for patent citations when evaluating research.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23608/full.
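The indirect search step of such a method might look like the sketch below: restrict a web search to Google Patents pages that quote an article's title verbatim. The `site:` query form is an assumption for illustration; the authors' actual Bing-based extraction and 98%-precision filtering are not reproduced here.

```python
def patent_citation_query(title):
    """Query string (for a search engine supporting site: and exact phrases)
    that finds Google Patents pages quoting the article title verbatim."""
    return f'site:patents.google.com "{title}"'

q = patent_citation_query("A relational model of data for large shared data banks")
print(q)
```

Counting the distinct patent pages returned per article then gives the patent citation counts that the paper correlates with Scopus citations.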
16. Lewandowski, D. ; Sünkler, S. ; Kerkmann, F.: Are ads on Google search engine results pages labeled clearly enough? : the influence of knowledge on search ads on users' selection behaviour.
In: Everything changes, everything stays the same? - Understanding information spaces : Proceedings of the 15th International Symposium of Information Science (ISI 2017), Berlin/Germany, 13th - 15th March 2017. Eds.: M. Gäde, V. Trkulja u. V. Petras. Glückstadt : vwh-Verlag, 2017. S.62-75.
(Schriften zur Informationswissenschaft; Bd. 70)
Abstract: In an online experiment using a representative sample of the German online population (n = 1,000), we compare users' selection behaviour on two versions of the same Google search engine results page (SERP), one showing advertisements and organic results, the other showing organic results only. Selection behaviour is analyzed in relation to users' knowledge of Google's business model, of SERP design, and to these users' actual performance in marking advertisements on SERPs correctly. We find that users who were not able to mark ads correctly selected ads significantly more often. This leads to the conclusion that ads need to be labeled more clearly, and that there is a need for more information literacy in search engine users.
Inhalt: Vgl.: http://www.vwh-verlag.de/vwh/wp-content/uploads/2017/03/titelei_isi17.pdf.
Anmerkung: Vgl.: http://searchstudies.org/wp-content/uploads/2017/03/Ads_Labeling_ISI2017_Lewandowski_Suenkler_Kerkmann-93478.pdf.
17. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. [20.04.2017].
Abstract: You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable, as alive in the digital world as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe."
When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
Themenfeld: Elektronisches Publizieren
Objekt: Google books
18. epd: Kaiserslauterer Forscher untersuchen Google-Suche.
In: ¬Die Rheinpfalz. Nr. 210 vom 09.09.2017.
(Ihr Wochenende / Mediathek / Medienwelten)
Inhalt: "According to a research project, personalization plays a smaller role than previously assumed when searching for politicians and parties via search engines such as Google. When entering politicians' names, different users are largely shown the same results, according to an interim finding, published yesterday, of an analysis commissioned by the state media authorities. The results come from the research project "#Datenspende: Google und die Bundestagswahl 2017" of the initiative AlgorithmWatch and the Technische Universität Kaiserslautern. On average, two different users receive seven to eight identical hits out of a total of nine search results when searching Google for the parties' lead candidates in the federal election campaign. The search results for parties, however, differ more strongly: out of nine results, only five to six are shared, the researchers found. Katharina Zweig, professor of computer science at TU Kaiserslautern, was surprised that the search results of different users differ so little. "That could look different again tomorrow," she warned. The study proves for the first time that it is fundamentally possible to make the algorithms of intermediaries such as search engines traceable in cases of suspicion. According to the results, there are repeatedly small groups of users with strongly deviating result lists. A final substantive assessment is still pending. For the project, according to the media authority, almost 4,000 volunteer users have so far installed a plug-in programmed by the researchers on their computers; three million donated data records have been stored to date. The project is financed by the state media authorities of Bavaria, Berlin-Brandenburg, Hesse, Rhineland-Palatinate, Saarland, and Saxony." Vgl.
auch: https://www.swr.de/swraktuell/rp/kaiserslautern/forschung-in-kaiserslautern-beeinflusst-google-die-bundestagswahl/-/id=1632/did=20110680/nid=1632/1mohmie/index.html. https://www.uni-kl.de/aktuelles/news/news/detail/News/aufruf-zur-datenspende-welche-nachrichten-zeigt-die-suchmaschine-google-zur-bundestagswahl-an/.
19. Zhao, Y. ; Ma, F. ; Xia, X.: Evaluating the coverage of entities in knowledge graphs behind general web search engines : poster.
In: http://www.iskocus.org/NASKO2017papers/NASKO2017_paper_10.pdf [NASKO 2017, June 15-16, 2017, Champaign, IL, USA].
Abstract: Web search engines, such as Google and Bing, are constantly employing results from knowledge organization and various visualization features to improve their search services. A knowledge graph, a large repository of structured knowledge represented in formal languages such as RDF (Resource Description Framework), is used to support the entity search feature of Google and Bing (Demartini, 2016). When a user searches for an entity, such as a person, an organization, or a place in Google or Bing, it is likely that a knowledge card will be presented on the right side bar of the search engine result pages (SERPs). For example, when a user searches for the entity Benedict Cumberbatch on Google, the knowledge card will show basic structured information about this person, including his date of birth, height, spouse, parents, his movies, etc. The knowledge card, which is used to present the result of entity search, is generated from knowledge graphs. Therefore, the quality of knowledge graphs is essential to the performance of entity search. However, studies on the quality of knowledge graphs from the angle of entity coverage are scant in the literature. This study aims to investigate the coverage of entities in the knowledge graphs behind Google and Bing.
Inhalt: Beitrag bei: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
Objekt: Google ; Bing
20. Tetzchner, J. von: As a monopoly in search and advertising Google is not able to resist the misuse of power : is the Internet turning into a battlefield of propaganda? How Google should be regulated.
In: Open Password. 2017, Nr.266 vom 13.10.2017 [http://www.password-online.de/?wysija-page=1&controller=email&action=view&email_id=339&wysijap=subscriptions&user_id=1045].
Abstract: Jon von Tetzchner developed the browsers Opera and Vivaldi. He is co-founder and CEO of Vivaldi Technologies. He has recently turned from a Google enthusiast into a Google critic. In an interview with Open Password he lays out his positions. Born in Iceland, he worked in Norway for many years and now resides near Boston.
Inhalt: "Let us start with your positive experiences with Google. I have known Google longer than most. At Opera, we were the first to add their search into the browser interface, enabling it directly from the search box and the address field. At that time, Google was an up-and-coming geeky company. I remember vividly meeting with Google's co-founder Larry Page, his relaxed dress code and his love for the Danger device, which he played with throughout our meeting. Later, I met with the other co-founder of Google, Sergey Brin, and got positive vibes. My first impression of Google was that it was a likeable company. Our cooperation with Google was a good one. Integrating their search into Opera helped us deliver a better service to our users and generated revenue that paid the bills. We helped Google grow, along with others that followed in our footsteps and integrated Google search into their browsers. Then the picture for you and for Opera darkened. Yes, then things changed. Google increased their proximity with the Mozilla foundation. They also introduced new services such as Google Docs. These services were great and gained quick popularity, but they also exposed the darker side of Google. Not only were these services made to be incompatible with Opera, but they also encouraged users to switch their browsers. I brought this up with Sergey Brin, in vain. For millions of Opera users to be able to access these services, we had to hide our browser's identity. The browser sniffing situation only worsened after Google started building their own browser, Chrome. ... ; How should Google be regulated? We should limit the amount of information that is being collected. In particular we should look at information that is being collected across sites. It should not be legal to combine data from multiple sites and services.
The fact that these sites and services are using the same underlying technology does not change the fact that the user's dealings are with one site at a time, and each site should not have the right to share the data with others. I believe this is the cornerstone of laws in many countries today, but these laws need to be enforced. Data about us is ours alone and it should not be possible to sell it. We should also limit the ability to target users individually. In the past, ads on sites were ads on sites. You might know what kind of users visited a site and you would place tech ads on tech sites and fashion ads on fashion sites. Now the ads follow you individually. That should be made illegal as it uses data collected from multiple sources and invades our privacy. I also believe there should be regulation as to how location data is used and any information related to our mobile devices. In addition, regulators need to be vigilant as to how companies that have monopoly power use their power. That kind of goes without saying. Companies with monopoly powers should not be able to use those powers when competing in an open market or using their monopoly services to limit competition."
Objekt: Google ; Chrome ; Vivaldi