Search (13 results, page 1 of 1)

  • author_ss:"Lalmas, M."
  1. Arapakis, I.; Lalmas, M.; Ceylan, H.; Donmez, P.: Automatically embedding newsworthy links to articles : from implementation to evaluation (2014) 0.02
    0.024930632 = product of:
      0.12465315 = sum of:
        0.053270485 = weight(_text_:evaluation in 1185) [ClassicSimilarity], result of:
          0.053270485 = score(doc=1185,freq=6.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.40136236 = fieldWeight in 1185, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1185)
        0.01861633 = weight(_text_:web in 1185) [ClassicSimilarity], result of:
          0.01861633 = score(doc=1185,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.18028519 = fieldWeight in 1185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1185)
        0.052766338 = weight(_text_:site in 1185) [ClassicSimilarity], result of:
          0.052766338 = score(doc=1185,freq=2.0), product of:
            0.1738463 = queryWeight, product of:
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.031640913 = queryNorm
            0.3035229 = fieldWeight in 1185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1185)
      0.2 = coord(3/15)
    
    Abstract
    News portals are a popular destination for web users. News providers are therefore interested in attaining higher visitor rates and promoting greater engagement with their content. One aspect of engagement deals with keeping users on site longer by allowing them to have enhanced click-through experiences. News portals have invested in ways to embed links within news stories but so far these links have been curated by news editors. Given the manual effort involved, the use of such links is limited to a small scale. In this article, we evaluate a system-based approach that detects newsworthy events in a news article and locates other articles related to these events. Our system does not rely on resources like Wikipedia to identify events, and it was designed to be domain independent. A rigorous evaluation, using Amazon's Mechanical Turk, was performed to assess the system-embedded links against the manually-curated ones. Our findings reveal that our system's performance is comparable with that of professional editors, and that users find the automatically generated highlights interesting and the associated articles worthy of reading. Our evaluation also provides quantitative and qualitative insights into the curation of links, from the perspective of users and professional editors.
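    The relevance figure for this entry (0.02, detailed as 0.024930632) is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch of how that number comes together, the snippet below recomputes it from the factors shown in the breakdown above; the variable names and term labels are mine, and the numeric constants are simply copied from the explanation tree.

    ```python
    import math

    # Per-term weight as reported by ClassicSimilarity explain output:
    #   weight      = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = sqrt(termFreq) * idf * fieldNorm
    def term_weight(freq, idf, query_norm, field_norm):
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.031640913   # queryNorm from the breakdown above
    FIELD_NORM = 0.0390625     # fieldNorm(doc=1185)

    weights = [
        term_weight(6.0, 4.1947007, QUERY_NORM, FIELD_NORM),  # _text_:evaluation
        term_weight(2.0, 3.2635105, QUERY_NORM, FIELD_NORM),  # _text_:web
        term_weight(2.0, 5.494352,  QUERY_NORM, FIELD_NORM),  # _text_:site
    ]

    # coord(3/15): only 3 of the 15 query clauses matched this document.
    score = sum(weights) * (3 / 15)
    print(round(score, 9))  # ~0.024930632, matching the listed score
    ```

    The same arithmetic applies to every entry below: sum the matching per-term weights, then scale by the coord factor.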
  2. Arapakis, I.; Cambazoglu, B.B.; Lalmas, M.: On the feasibility of predicting popular news at cold start (2017) 0.02
    0.018095579 = product of:
      0.09047789 = sum of:
        0.011384088 = product of:
          0.022768175 = sum of:
            0.022768175 = weight(_text_:online in 3595) [ClassicSimilarity], result of:
              0.022768175 = score(doc=3595,freq=4.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.23710167 = fieldWeight in 3595, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3595)
          0.5 = coord(1/2)
        0.026327467 = weight(_text_:web in 3595) [ClassicSimilarity], result of:
          0.026327467 = score(doc=3595,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.25496176 = fieldWeight in 3595, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3595)
        0.052766338 = weight(_text_:site in 3595) [ClassicSimilarity], result of:
          0.052766338 = score(doc=3595,freq=2.0), product of:
            0.1738463 = queryWeight, product of:
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.031640913 = queryNorm
            0.3035229 = fieldWeight in 3595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3595)
      0.2 = coord(3/15)
    
    Abstract
    Prominent news sites on the web provide hundreds of news articles daily. The abundance of news content competing to attract online attention, coupled with the manual effort involved in article selection, necessitates the timely prediction of future popularity of these news articles. The future popularity of a news article can be estimated using signals indicating the article's penetration in social media (e.g., number of tweets) in addition to traditional web analytics (e.g., number of page views). In practice, it is important to make such estimations as early as possible, preferably before the article is made available on the news site (i.e., at cold start). In this paper, we perform a study on cold-start news popularity prediction using a collection of 13,319 news articles obtained from Yahoo News, a major news provider. We characterize the popularity of news articles through a set of online metrics and try to predict their values across time using machine learning techniques on a large collection of features obtained from various sources. Our findings indicate that predicting news popularity at cold start is a difficult task, contrary to the findings of a prior work on the same topic. Most articles' popularity may not be accurately anticipated solely on the basis of content features, without access to early-stage popularity values.
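    The abstract describes training machine-learning models on pre-publication features to estimate popularity metrics at cold start. A minimal sketch of that setup, assuming a generic feature table and a page-view target; the synthetic data, the feature layout, and the choice of gradient-boosted trees are illustrative assumptions, not the paper's actual pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical cold-start features available before publication:
    # article length, headline length, category id, planned publication hour.
    X = rng.random((13319, 4))
    # Hypothetical target: log of page views observed after publication.
    y = rng.random(13319)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingRegressor(random_state=0)
    model.fit(X_train, y_train)

    print("R^2 on held-out articles:", r2_score(y_test, model.predict(X_test)))
    ```

    With real data, the interesting comparison is how much this pre-publication model lags behind one that also sees early-stage popularity signals, which is the gap the paper measures.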
  3. Nikolov, D.; Lalmas, M.; Flammini, A.; Menczer, F.: Quantifying biases in online information exposure (2019) 0.01
    0.005656933 = product of:
      0.042426996 = sum of:
        0.01609953 = product of:
          0.03219906 = sum of:
            0.03219906 = weight(_text_:online in 4986) [ClassicSimilarity], result of:
              0.03219906 = score(doc=4986,freq=8.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.33531237 = fieldWeight in 4986, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4986)
          0.5 = coord(1/2)
        0.026327467 = weight(_text_:web in 4986) [ClassicSimilarity], result of:
          0.026327467 = score(doc=4986,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.25496176 = fieldWeight in 4986, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4986)
      0.13333334 = coord(2/15)
    
    Abstract
    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this article, we mine a massive data set of web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles."
  4. Blanke, T.; Lalmas, M.; Huibers, T.: ¬A framework for the theoretical evaluation of XML retrieval (2012) 0.00
    0.004261639 = product of:
      0.06392458 = sum of:
        0.06392458 = weight(_text_:evaluation in 509) [ClassicSimilarity], result of:
          0.06392458 = score(doc=509,freq=6.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.48163486 = fieldWeight in 509, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=509)
      0.06666667 = coord(1/15)
    
    Abstract
    We present a theoretical framework to evaluate XML retrieval. XML retrieval deals with retrieving those document components (the XML elements) that specifically answer a query. In this article, theoretical evaluation is concerned with the formal representation of qualitative properties of retrieval models. It complements experimental methods by showing the properties of the underlying reasoning assumptions that decide when a document is about a query. We define a theoretical methodology based on the idea of "aboutness" and apply it to current XML retrieval models. This allows comparing and analyzing the reasoning behavior of the XML retrieval models evaluated in the INEX campaigns. For each model we derive functional and qualitative properties that qualify its formal behavior. We then use these properties to explain experimental results obtained with some of the XML retrieval models.
  5. Szlávik, Z.; Tombros, A.; Lalmas, M.: Summarisation of the logical structure of XML documents (2012) 0.00
    0.004261639 = product of:
      0.06392458 = sum of:
        0.06392458 = weight(_text_:evaluation in 2731) [ClassicSimilarity], result of:
          0.06392458 = score(doc=2731,freq=6.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.48163486 = fieldWeight in 2731, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=2731)
      0.06666667 = coord(1/15)
    
    Abstract
    Summarisation is traditionally used to produce summaries of the textual contents of documents. In this paper, it is argued that summarisation methods can also be applied to the logical structure of XML documents. Structure summarisation selects the most important elements of the logical structure and ensures that the user's attention is focused on sections, subsections, etc. that are believed to be of particular interest. Structure summaries are shown to users as hierarchical tables of contents. This paper discusses methods for structure summarisation that use various features of XML elements in order to select the document portions that a user's attention should be focused on. An evaluation methodology for structure summarisation is also introduced, and summarisation results using various summariser versions are presented and compared to one another. We show that data sets used in information retrieval evaluation can be used effectively in order to produce high-quality (query-independent) structure summaries. We also discuss the choice and effectiveness of particular summariser features with respect to several evaluation measures.
  6. Kazai, G.; Lalmas, M.: ¬The overlap problem in content-oriented XML retrieval evaluation (2004) 0.00
    0.0041007637 = product of:
      0.061511453 = sum of:
        0.061511453 = weight(_text_:evaluation in 4083) [ClassicSimilarity], result of:
          0.061511453 = score(doc=4083,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.4634533 = fieldWeight in 4083, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.078125 = fieldNorm(doc=4083)
      0.06666667 = coord(1/15)
    
  7. Lalmas, M.; Ruthven, I.: Representing and retrieving structured documents using the Dempster-Shafer theory of evidence : modelling and evaluation (1998) 0.00
    0.0040595494 = product of:
      0.060893238 = sum of:
        0.060893238 = weight(_text_:evaluation in 1076) [ClassicSimilarity], result of:
          0.060893238 = score(doc=1076,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.4587954 = fieldWeight in 1076, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1076)
      0.06666667 = coord(1/15)
    
    Abstract
    Reports on a theoretical model of structured document indexing and retrieval based on the Dempster-Shafer Theory of Evidence. Includes a description of the model of structured document retrieval, the representation of structured documents, the representation of individual components, how components are combined, details of the combination process, and how relevance is captured within the model. Also presents a detailed account of an implementation of the model, and an evaluation scheme designed to test the effectiveness of the model.
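    The abstract refers to combining evidence from document components under the Dempster-Shafer theory. As a generic illustration of Dempster's rule of combination, not the authors' actual indexing model, the sketch below combines two toy mass functions defined over sets of index terms (one per document component).

    ```python
    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions keyed by frozensets."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
        # Normalise by the non-conflicting mass (assumes conflict < 1).
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Toy evidence from two components of a structured document.
    m_title = {frozenset({"xml", "retrieval"}): 0.6, frozenset({"xml"}): 0.4}
    m_body  = {frozenset({"xml"}): 0.7, frozenset({"retrieval"}): 0.3}

    print(combine(m_title, m_body))  # mass concentrates on {"xml"}
    ```

    In the paper's setting, the combined belief over terms is what drives the relevance estimate for the composite document; the masses here are placeholders.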
  8. Kazai, G.; Lalmas, M.; Fuhr, N.; Gövert, N.: ¬A report on the first year of the INitiative for the Evaluation of XML Retrieval (INEX'02) (2004) 0.00
    0.0040595494 = product of:
      0.060893238 = sum of:
        0.060893238 = weight(_text_:evaluation in 2267) [ClassicSimilarity], result of:
          0.060893238 = score(doc=2267,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.4587954 = fieldWeight in 2267, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2267)
      0.06666667 = coord(1/15)
    
    Abstract
    The INitiative for the Evaluation of XML retrieval (INEX) aims at providing an infrastructure to evaluate the effectiveness of content-oriented XML retrieval systems. To this end, in the first round of INEX in 2002, a test collection of real-world XML documents, along with a set of topics and respective relevance assessments, has been created with the collaboration of 36 participating organizations. In this article, we provide an overview of the first round of the INEX initiative.
  9. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Incorporating user search behavior into relevance feedback (2003) 0.00
    0.0020503819 = product of:
      0.030755727 = sum of:
        0.030755727 = weight(_text_:evaluation in 5169) [ClassicSimilarity], result of:
          0.030755727 = score(doc=5169,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 5169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5169)
      0.06666667 = coord(1/15)
    
    Abstract
    Ruthven, Lalmas, and van Rijsbergen rank and select terms for query expansion using information gathered on searcher evaluation behavior. Using the TREC Financial Times and Los Angeles Times collections and search topics from TREC-6 placed in simulated work situations, six student subjects each performed three searches on an experimental system and three on a control system, with instructions to search by natural language expression in any way they found comfortable. Searching was analyzed for behavior differences between experimental and control situations, and for effectiveness and perceptions. In three experiments, paired t-tests were the analysis tool, with the controls being a no-relevance-feedback system, a standard ranking for automatic expansion, and a standard ranking for interactive expansion, while the experimental systems based ranking upon user information on temporal relevance and partial relevance. Two further experiments compare using user behavior (number assessed relevant and similarity of relevant documents) to choose a query expansion technique against a non-selective technique, and finally the effect of providing the user with knowledge of the process. When partial relevance data and time-of-assessment data are incorporated in term ranking, more relevant documents were recovered in fewer iterations; however, overall retrieval effectiveness was not improved. The subjects nonetheless rated the suggested terms as more useful and used them more heavily. Explanations of what the feedback techniques were doing led to higher use of the techniques.
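    The abstract describes ranking candidate expansion terms using behavioral evidence such as partial relevance and time of assessment. The toy scheme below is only in that spirit; the weighting formula, recency discount, and data layout are hypothetical and not the paper's algorithm.

    ```python
    from collections import Counter

    def rank_expansion_terms(assessed_docs, top_n=5):
        """Illustrative ranking: weight each candidate term by the partial-relevance
        score of the documents it occurs in, scaled by assessment recency
        (later assessments count more). A hypothetical scheme, not the paper's."""
        scores = Counter()
        n = len(assessed_docs)
        for order, (terms, partial_relevance) in enumerate(assessed_docs, start=1):
            recency = order / n
            for term in set(terms):
                scores[term] += partial_relevance * recency
        return [term for term, _ in scores.most_common(top_n)]

    docs = [  # (terms in assessed document, partial relevance given by the user)
        (["query", "expansion", "feedback"], 0.8),
        (["relevance", "feedback", "terms"], 0.5),
        (["interactive", "expansion"], 1.0),
    ]
    print(rank_expansion_terms(docs))
    ```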
  10. Lalmas, M.: XML retrieval (2009) 0.00
    0.0020503819 = product of:
      0.030755727 = sum of:
        0.030755727 = weight(_text_:evaluation in 4998) [ClassicSimilarity], result of:
          0.030755727 = score(doc=4998,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 4998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4998)
      0.06666667 = coord(1/15)
    
    Abstract
    Documents usually have both content and structure. The content refers to the text of the document, whereas the structure refers to how a document is logically organized. An increasingly common way to encode the structure is through the use of a mark-up language. Nowadays, the most widely used mark-up language for representing structure is the eXtensible Mark-up Language (XML). XML can be used to provide focused access to documents, i.e. returning XML elements, such as sections and paragraphs, instead of whole documents in response to a query. Such focused strategies are of particular benefit for information repositories containing long documents, or documents covering a wide variety of topics, where users are directed to the most relevant content within a document. The increased adoption of XML to represent document structure requires the development of tools to effectively access documents marked up in XML. This book provides a detailed description of query languages, indexing strategies, ranking algorithms, and presentation scenarios developed to access XML documents. Major advances in XML retrieval were seen from 2002 as a result of INEX, the INitiative for the Evaluation of XML Retrieval. INEX, also described in this book, provided test sets for evaluating XML retrieval effectiveness. Many of the developments and results described in this book were investigated within INEX.
  11. Arapakis, I.; Lalmas, M.; Cambazoglu, B.B.; Marcos, M.-C.; Jose, J.M.: User engagement in online news : under the scope of sentiment, interest, affect, and gaze (2014) 0.00
    0.001073302 = product of:
      0.01609953 = sum of:
        0.01609953 = product of:
          0.03219906 = sum of:
            0.03219906 = weight(_text_:online in 1497) [ClassicSimilarity], result of:
              0.03219906 = score(doc=1497,freq=8.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.33531237 = fieldWeight in 1497, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1497)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Online content providers, such as news portals and social media platforms, constantly seek new ways to attract large shares of online attention by keeping their users engaged. A common challenge is to identify which aspects of online interaction influence user engagement the most. In this article, through an analysis of a news article collection obtained from Yahoo News US, we demonstrate that news articles exhibit considerable variation in terms of the sentimentality and polarity of their content, depending on factors such as news provider and genre. Moreover, through a laboratory study, we observe the effect of sentimentality and polarity of news and comments on a set of subjective and objective measures of engagement. In particular, we show that attention, affect, and gaze differ across news of varying interestingness. As part of our study, we also explore methods that exploit the sentiments expressed in user comments to reorder the lists of comments displayed in news pages. Our results indicate that user engagement can be anticipated if we account for the sentimentality and polarity of the content as well as other factors that drive attention and inspire human curiosity.
  12. Crestani, F.; Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Mathematical, logical, and formal methods in information retrieval : an introduction to the special issue (2003) 0.00
    8.573814E-4 = product of:
      0.01286072 = sum of:
        0.01286072 = product of:
          0.02572144 = sum of:
            0.02572144 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
              0.02572144 = score(doc=1451,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.23214069 = fieldWeight in 1451, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1451)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    22. 3.2003 19:27:36
  13. Lehmann, J.; Castillo, C.; Lalmas, M.; Baeza-Yates, R.: Story-focused reading in online news and its potential for user engagement (2017) 0.00
    5.36651E-4 = product of:
      0.008049765 = sum of:
        0.008049765 = product of:
          0.01609953 = sum of:
            0.01609953 = weight(_text_:online in 3529) [ClassicSimilarity], result of:
              0.01609953 = score(doc=3529,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.16765618 = fieldWeight in 3529, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3529)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)