Search (916 results, page 46 of 46)

  • Filter: theme_ss:"Suchmaschinen"
  1. Sleem-Amer, M.; Bigorgne, I.; Brizard, S.; Santos, L.D.P.D.; Bouhairi, Y. El; Goujon, B.; Lorin, S.; Martineau, C.; Rigouste, L.; Varga, L.: Intelligent semantic search engines for opinion and sentiment mining (2012) 0.00
    Score 1.6444239E-4 (Lucene ClassicSimilarity, query term "information", doc 100):
      queryWeight = idf(docFreq=20772, maxDocs=44218) × queryNorm = 1.7554779 × 0.028978055 = 0.050870337
      fieldWeight = tf(freq=2.0) × idf × fieldNorm = 1.4142135 × 1.7554779 × 0.0390625 = 0.09697737
      score = queryWeight × fieldWeight × coord(1/2) × coord(1/15) = 1.6444239E-4
    
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
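
    The score breakdown shown for this first result is plain arithmetic and can be re-derived directly. A minimal Python sketch, using only the numbers reported in the engine's explain output above:

      import math

      # Values copied from the ClassicSimilarity explain output for result 1.
      tf = math.sqrt(2.0)            # tf(freq=2.0) = 1.4142135
      idf = 1.7554779                # idf(docFreq=20772, maxDocs=44218)
      query_norm = 0.028978055
      field_norm = 0.0390625

      query_weight = idf * query_norm        # 0.050870337
      field_weight = tf * idf * field_norm   # 0.09697737

      # coord(1/2) and coord(1/15) down-weight partially matched query clauses.
      score = query_weight * field_weight * 0.5 * (1.0 / 15.0)
      print(f"{score:.7E}")                  # ~1.6444239E-4, matching the listing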
  2. Lewandowski, D.; Drechsler, J.; Mach, S. von: Deriving query intents from web search engine queries (2012) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1773-1788
  3. Kruschwitz, U.; Lungley, D.; Albakour, M-D.; Song, D.: Deriving query suggestions for site search (2013) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.1975-1994
  4. Akhigbe, B.I.; Afolabi, B.S.; Adagunodo, E.R.: Modelling user-centered attributes : the Web search engine as a case (2015) 0.00
    
    Abstract
    This paper models user-centered attributes, from which first- and second-order measurement models (FSoMM) are proposed using factor analysis in a quantitative evaluative procedure. The motivation is the need to relate users' needs, expressed as requirements, to Web search engines (WeSEs) operating in a dynamic context; the FSoMM are therefore formulated to possess baseline properties with reasonable validity and reliability. This is achieved by treating the ways users "seek out and use" information as characteristics that can serve as user attributes, on the premise that factors modelled from these attributes encapsulate users' needs. Using a qualitative evaluative approach, the factors are then translated into user requirements for WeSE development. Results show that both models demonstrate reasonable model fit, indicating that user requirements can be communicated through measurement models; as illustrated here, the qualitative and quantitative evaluative approaches both remain invaluable in this respect. We therefore infer that a WeSE's success in delivering assistance to users, particularly in a dynamic context, must rest not only on technological progress but also on users' requirements.
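
    The abstract names factor analysis as the quantitative step of the procedure. As a hedged illustration only (the survey items and data below are invented, not the authors' instrument), a minimal scikit-learn sketch of extracting latent user-attribute factors from Likert-style responses:

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      # Hypothetical responses: 200 users rating 6 observed attributes
      # (e.g. query ease, result readability, speed) on a 1-5 scale.
      responses = rng.integers(1, 6, size=(200, 6)).astype(float)

      fa = FactorAnalysis(n_components=2, random_state=0)  # two latent factors
      fa.fit(responses)
      # Loadings: how strongly each observed attribute maps onto each factor.
      print(fa.components_.round(2))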
  5. Cheng, S.; YunTao, P.; JunPeng, Y.; Hong, G.; ZhengLu, Y.; ZhiYu, H.: PageRank, HITS and impact factor for journal ranking (2009) 0.00
    
    Source
    CSIE '09: Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, Volume 6
  6. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.00
    
    Source
    Information processing and management. 50(2014) no.6, S.821-856
  7. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.00
    
    Abstract
    Users of web search engines are known to mostly focus on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
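
    As a hedged sketch of the kind of head-vs-tail comparison the abstract describes (the counts below are invented and the test choice is ours, not necessarily the authors'), binary relevance judgments can be compared with a chi-square test:

      from scipy.stats import chi2_contingency

      # Hypothetical (relevant, not relevant) counts for judged results drawn
      # from the head and from the tail of the result list.
      head = [42, 18]
      tail = [37, 23]

      chi2, p, dof, expected = chi2_contingency([head, tail])
      print(f"chi2={chi2:.2f}, p={p:.3f}")  # p >= 0.05: no significant difference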
  8. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017) 0.00
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1137-1148
  9. Zhao, Y.; Ma, F.; Xia, X.: Evaluating the coverage of entities in knowledge graphs behind general web search engines : Poster (2017) 0.00
    
    Abstract
    Web search engines, such as Google and Bing, constantly employ results from knowledge organization and various visualization features to improve their search services. A knowledge graph, a large repository of structured knowledge represented in formal languages such as RDF (Resource Description Framework), is used to support the entity search features of Google and Bing (Demartini, 2016). When a user searches for an entity, such as a person, an organization, or a place, in Google or Bing, it is likely that a knowledge card will be presented in the right-hand sidebar of the search engine result pages (SERPs). For example, when a user searches for the entity Benedict Cumberbatch on Google, the knowledge card will show basic structured information about this person, including his date of birth, height, spouse, parents, his movies, etc. The knowledge card, which presents the result of entity search, is generated from knowledge graphs. Therefore, the quality of knowledge graphs is essential to the performance of entity search. However, studies on the quality of knowledge graphs from the angle of entity coverage are scant in the literature. This study aims to investigate the coverage of entities in the knowledge graphs behind Google and Bing.
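
    The knowledge graphs behind Google and Bing are not publicly queryable, so as an illustrative stand-in only, entity coverage can be probed against an open knowledge graph such as Wikidata via its public SPARQL endpoint (the entity labels below are taken from this page, not from the study):

      import requests

      ENDPOINT = "https://query.wikidata.org/sparql"

      def covered(label: str) -> bool:
          """True if some Wikidata entity carries this English label."""
          query = f'ASK {{ ?e rdfs:label "{label}"@en }}'
          r = requests.get(ENDPOINT,
                           params={"query": query, "format": "json"},
                           headers={"User-Agent": "coverage-probe/0.1"})
          r.raise_for_status()
          return r.json()["boolean"]

      for entity in ["Benedict Cumberbatch", "Agence France Presse"]:
          print(entity, covered(entity))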
  10. Sarigil, E.; Sengor Altingovde, I.; Blanco, R.; Barla Cambazoglu, B.; Ozcan, R.; Ulusoy, Ö.: Characterizing, predicting, and handling web search queries that match very few or no results (2018) 0.00
    
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.2, S.256-270
  11. Ford, N.; Miller, D.; Moss, N.: Web search strategies and approaches to studying (2003) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.6, S.473-488
  12. Bar-Ilan, J.; Levene, M.; Mat-Hassan, M.: Methods for evaluating dynamic changes in search engine rankings : a case study (2006) 0.00
    
    Abstract
    Purpose - The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines. Design/methodology/approach - The paper compares rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries over the three-month interval. To assess the changes in the rankings, three measures were computed for each data collection point and each search engine. Findings - The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top-ten results of the two search engines had surprisingly low overlap; with such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging. Originality/value - The paper shows that, because of the abundance of information on the web, ranking search results is of extreme importance. It compares several measures for computing the similarity between rankings of search tools and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
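
    As a hedged sketch of such ranking-similarity measures (toy result lists; simple top-10 overlap plus Kendall's tau on the shared items, which need not be the three measures used in the paper):

      from scipy.stats import kendalltau

      top10_a = ["u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"]
      top10_b = ["u3", "u1", "u11", "u12", "u2", "u13", "u14", "u15", "u16", "u17"]

      shared = [u for u in top10_a if u in top10_b]
      print(f"top-10 overlap: {len(shared) / 10:.1f}")   # 0.3 for these lists

      # Rank agreement on the shared items only (needs at least two of them).
      ranks_a = [top10_a.index(u) for u in shared]
      ranks_b = [top10_b.index(u) for u in shared]
      tau, p = kendalltau(ranks_a, ranks_b)
      print(f"Kendall tau on shared items: {tau:.2f}")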
  13. Spink, A.; Jansen, B.J.; Blakely, C.; Koshman, S.: ¬A study of results overlap and uniqueness among major Web search engines (2006) 0.00
    
    Source
    Information processing and management. 42(2006) no.5, S.1379-1391
  14. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.00
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  15. Bressan, M.; Peserico, E.: Choose the damping, choose the ranking? (2010) 0.00
    
    Abstract
    To what extent can changes in PageRank's damping factor affect node ranking? We prove that, at least on some graphs, the top k nodes assume all possible k! orderings as the damping factor varies, even if it varies within an arbitrarily small interval (e.g. [0.84999, 0.85001]). Thus, the rank of a node for a given (finite set of discrete) damping factor(s) provides very little information about the rank of that node as the damping factor varies over a continuous interval. We bypass this problem introducing lineage analysis and proving that there is a simple condition, with a "natural" interpretation independent of PageRank, that allows one to verify "in one shot" if a node outperforms another simultaneously for all damping factors and all damping variables (informally, time variant damping factors). The novel notions of strong rank and weak rank of a node provide a measure of the fuzziness of the rank of that node, of the objective orderability of a graph's nodes, and of the quality of results returned by different ranking algorithms based on the random surfer model. We deploy our analytical tools on a 41M node snapshot of the .it Web domain and on a 0.7M node snapshot of the CiteSeer citation graph. Among other findings, we show that rank is indeed relatively stable in both graphs; that "classic" PageRank (d=0.85) marginally outperforms Weighted In-degree (d->0), mainly due to its ability to ferret out "niche" items; and that, for both the Web and CiteSeer, the ideal damping factor appears to be 0.8-0.9 to obtain those items of high importance to at least one (model of randomly surfing) user, but only 0.5-0.6 to obtain those items important to every (model of randomly surfing) user.
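
    The sensitivity the abstract proves can be observed directly. A minimal networkx sketch (toy graph, not the paper's datasets) recomputes PageRank as the damping factor d varies and prints the induced ordering:

      import networkx as nx

      # Small invented directed graph; alpha is networkx's name for the damping factor.
      G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (1, 4), (4, 2)])

      for d in (0.5, 0.85, 0.99):
          pr = nx.pagerank(G, alpha=d)
          order = sorted(pr, key=pr.get, reverse=True)
          print(f"d={d}: ranking {order}")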
  16. Keith, S.: Searching for news headlines : connections between unresolved hyperlinking issues and a new battle over copyright online (2007) 0.00
    
    Abstract
    In March 2005, the Paris-based news service Agence France Presse (AFP) sued Google Inc. in an American court, charging that the search engine's news aggregator program had illegally infringed the wire service's copyright. The lawsuit, filed in the U.S. District Court for the District of Columbia, claimed that Google News had engaged in the infringement since its launch in September 2002 by »reproducing and publicly displaying AFP's photographs, headlines, and story leads«. The claim also said that Google News had ignored requests that it cease and desist the infringement, and it asked for more than $17 million (about 13.6 million euros) in damages. Within a few days, Google News was removing links to Agence France Presse news articles and photographs. However, Agence France Presse said it would still pursue the lawsuit because of the licensing fees it was owed as a result of what it claimed was Google's past copyright infringement. The case, which was still pending in early 2007 as the sides struggled to reconstruct and evaluate specific past Google News pages, was interesting for several reasons. First, it pitted the company that owns the world's most popular search engine against the world's oldest news service; Agence France Presse was founded in Paris in 1835 by Charles-Louis Havas, sometimes known as the father of global journalism. Second, the copyright-infringement allegations made by AFP had not been made by most of the 4,500 or so other news organizations whose material is used in exactly the same way on Google News every day, though Google did lose somewhat similar cases in German and Belgian courts in 2004 and 2006, respectively. Third, AFP's assertions and Google's counterclaims offer an intriguing argument about the nature of key components of traditional and new-media journalism, especially news headlines. Finally, the case warrants further examination because it is essentially an argument over the fundamental nature of Internet hyperlinking. Some commentators have noted that a ruling against Google could be disastrous for blogs, which also often quote news stories, while other commentators have concluded that a victory for Agence France Presse would call into question the future of online news aggregators. This chapter uses the Agence France Presse lawsuit as a way to examine arguments about the legality of news aggregator links to copyrighted material. Using traditional legal research methods, it attempts to put the case into context by referring to key U.S. and European Internet hyperlinking lawsuits from the 1990s through 2006. The chapter also discusses the nature of specific traditional journalistic forms, such as headlines and story leads, and whether they can be copyrighted. Finally, the chapter argues that out-of-court settlements and conflicting court rulings have left considerable ambiguity around the intersection of copyright, free speech, and information-cataloging concerns, leaving Google News and other aggregators vulnerable to claims of copyright infringement.
