Search (4 results, page 1 of 1)

  • author_ss:"Azzopardi, L."
  1. Balog, K.; Azzopardi, L.; Rijke, M. de: A language modeling framework for expert finding (2009) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 2447) [ClassicSimilarity], result of:
          0.009274333 = score(doc=2447,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 2447, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2447)
      0.16666667 = coord(1/6)
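
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: fieldWeight = sqrt(tf) * idf * fieldNorm, queryWeight = idf * queryNorm, the term score is their product, and coord(1/6) scales it because one of six query terms matched. The short sketch below only reproduces that arithmetic from the values shown; the variable names are ours and the code is not part of the retrieval system.

      import math

      # Values copied from the explanation above (term "in", doc 2447).
      freq = 6.0                # termFreq
      idf = 1.3602545           # idf(docFreq=30841, maxDocs=44218) = 1 + ln(44218 / (30841 + 1))
      query_norm = 0.043654136  # queryNorm
      field_norm = 0.046875     # fieldNorm(doc=2447)
      coord = 1.0 / 6.0         # coord(1/6): 1 of 6 query terms matched

      tf = math.sqrt(freq)                      # 2.4494898
      field_weight = tf * idf * field_norm      # 0.1561842
      query_weight = idf * query_norm           # 0.059380736
      term_score = query_weight * field_weight  # 0.009274333
      print(coord * term_score)                 # 0.0015457221, the score reported above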
    
    Abstract
    Statistical language models have been successfully applied to many information retrieval tasks, including expert finding: the process of identifying experts given a particular topic. In this paper, we introduce and detail language modeling approaches that integrate the representation, association and search of experts using various textual data sources into a generative probabilistic framework. This provides a simple, intuitive, and extensible theoretical framework to underpin research into expertise search. To demonstrate the flexibility of the framework, we model two search strategies for finding experts that incorporate different types of evidence extracted from the data, and then extend them to also incorporate co-occurrence information. The proposed models are evaluated in the context of enterprise search systems within an intranet environment, where it is reasonable to assume that the list of experts is known and that the data to be mined is publicly accessible. Our experiments show that excellent performance can be achieved by using these models in such environments, and that this theoretical and empirical work paves the way for future principled extensions.
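
    The abstract does not give the models' equations, but the general idea of a candidate-centric generative approach can be illustrated as follows: build a smoothed language model from the documents associated with each candidate and rank candidates by the probability that this model generates the query. The sketch below is a minimal illustration under that assumption (Jelinek-Mercer smoothing, invented function and variable names); it is not the paper's exact formulation.

      from collections import Counter

      def score_candidate(query_terms, candidate_docs, collection_tf, collection_len, lam=0.5):
          # Illustrative candidate-centric scorer: probability that the documents
          # associated with a candidate generate the query, smoothed against the
          # whole collection. Not the paper's exact model.
          cand_tf = Counter()
          for doc in candidate_docs:          # each doc is a list of tokens
              cand_tf.update(doc)
          cand_len = sum(cand_tf.values()) or 1

          score = 1.0
          for t in query_terms:
              p_cand = cand_tf[t] / cand_len
              p_coll = collection_tf[t] / collection_len
              score *= (1 - lam) * p_cand + lam * p_coll
          return score

      # Toy usage: rank two hypothetical candidates for the query "language models".
      docs = {"A": [["language", "models", "retrieval"], ["expert", "finding"]],
              "B": [["enterprise", "search", "systems"]]}
      collection_tf = Counter(t for ds in docs.values() for d in ds for t in d)
      collection_len = sum(collection_tf.values())
      ranking = sorted(docs, key=lambda c: score_candidate(["language", "models"], docs[c],
                                                           collection_tf, collection_len),
                       reverse=True)
      print(ranking)  # candidate "A" should rank first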
  2. Russell-Rose, T.; Chamberlain, J.; Azzopardi, L.: Information retrieval in the workplace : a comparison of professional search practices (2018) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 5048) [ClassicSimilarity], result of:
          0.008924231 = score(doc=5048,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 5048, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5048)
      0.16666667 = coord(1/6)
    
    Abstract
    Legal researchers, recruitment professionals, healthcare information professionals, and patent analysts all undertake work tasks where search forms a core part of their duties. In these instances, the search task is often complex and time-consuming and requires specialist expertise to identify relevant documents and insights within large domain-specific repositories and collections. Several studies have investigated the search practices of professionals such as these, but few have attempted to compare their practices directly, so it remains unclear to what extent insights and approaches from one domain can be applied to another. In this paper, we describe the results of a survey of a purposive sample of 108 legal researchers, 64 recruitment professionals and 107 healthcare information professionals. Their responses are compared with results from a previous survey of 81 patent analysts. The survey investigated their search practices and preferences, the types of functionality they value, and their requirements for future information retrieval systems. The results reveal that these professions share many fundamental needs and face similar challenges: in particular, a continuing preference to formulate queries as Boolean expressions, the need to manage, organise and re-use search strategies and results, and an ambivalence toward the use of relevance ranking. The results stress the importance of recall and coverage for the healthcare and patent professionals, while precision and recency were more important to the legal and recruitment professionals. The results also highlight the need to ensure that search systems give confidence to the professional searcher, so trust, explainability and accountability remain a significant challenge when developing such systems. The findings suggest that translational research between the different areas could benefit professionals across domains.
  3. Ruthven, I.; Baillie, M.; Azzopardi, L.; Bierig, R.; Nicol, E.; Sweeney, S.; Yakici, M.: Contextual factors affecting the utility of surrogates within exploratory search (2008) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 2042) [ClassicSimilarity], result of:
          0.008834538 = score(doc=2042,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 2042, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2042)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper, we investigate how information surrogates might be useful in exploratory search and what information is useful for a surrogate to contain. By comparing assessments based on artificially created information surrogates, we investigate the effect of the source of information, the quality of the information source, and the date of the information upon the assessment process. We also investigate how varying levels of topical knowledge, assessor confidence and prior expectation affect the assessment of information surrogates. We show that both types of contextual information (characteristics of the surrogate and characteristics of the assessor) affect how the information surrogates are judged and what actions are performed as a result of the surrogates.
  4. Baillie, M.; Azzopardi, L.; Ruthven, I.: Evaluating epistemic uncertainty under incomplete assessments (2008) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 2065) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=2065,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 2065, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2065)
      0.16666667 = coord(1/6)
    
    Abstract
    This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. The methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. Its adoption is advantageous because detecting epistemic uncertainty (the amount of knowledge, or ignorance, we have about the estimate of a system's performance) during the evaluation process can guide researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology can lead towards a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
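
    The key practical point in the abstract is that documents ranked below the pooling depth are unjudged, so a measure computed at a greater depth carries epistemic uncertainty. The sketch below illustrates that effect by bounding precision@k, treating unjudged documents first as non-relevant and then as relevant; it is only an illustration of the underlying issue and makes no claim about the paper's actual methodology.

      def precision_at_k_bounds(ranked_ids, qrels, k):
          # qrels maps doc id -> 1 (relevant) or 0 (non-relevant); documents outside
          # the judging pool are simply absent. Returns (lower, upper) bounds on P@k:
          # unjudged counted as non-relevant vs. as relevant. Illustrative only.
          top_k = ranked_ids[:k]
          relevant = sum(1 for d in top_k if qrels.get(d) == 1)
          unjudged = sum(1 for d in top_k if d not in qrels)
          return relevant / k, (relevant + unjudged) / k

      # Pool judged to depth 3, but P@5 is measured: the last two documents are unjudged.
      qrels = {"d1": 1, "d2": 0, "d3": 1}
      run = ["d1", "d2", "d3", "d9", "d7"]
      print(precision_at_k_bounds(run, qrels, 5))  # (0.4, 0.8): a wide uncertainty interval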