Search (9 results, page 1 of 1)

  • author_ss:"MacFarlane, A."
  • year_i:[2000 TO 2010}
  1. MacFarlane, A.: Evaluation of web search for the information practitioner (2007) 0.01
    0.009637499 = product of:
      0.06746249 = sum of:
        0.055354897 = weight(_text_:web in 817) [ClassicSimilarity], result of:
          0.055354897 = score(doc=817,freq=14.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.57238775 = fieldWeight in 817, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=817)
        0.012107591 = weight(_text_:information in 817) [ClassicSimilarity], result of:
          0.012107591 = score(doc=817,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 817, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=817)
      0.14285715 = coord(2/14)
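    The nested breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring, repeated for every result on this page. As a minimal sketch of how one branch is computed -- here the "web" weight for result 1 -- the following Python reproduces the arithmetic from the quantities shown, using Lucene's formulas tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), and term score = queryWeight * fieldWeight:

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          """One term's contribution to a ClassicSimilarity explain tree."""
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.2635105 for "web"
          tf = math.sqrt(freq)                               # 3.7416575 for freq=14
          query_weight = idf * query_norm                    # 0.09670874
          field_weight = tf * idf * field_norm               # 0.57238775
          return query_weight * field_weight                 # 0.055354897

      score = classic_term_score(freq=14.0, doc_freq=4597, max_docs=44218,
                                 query_norm=0.029633347, field_norm=0.046875)

    The per-term scores are then summed and scaled by coord(2/14), the fraction of query clauses that matched, giving the final 0.009637499.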
    
    Abstract
    Purpose - The aim of the paper is to put forward a structured mechanism for web search evaluation. The paper seeks to point to useful scientific research and show how information practitioners can use these methods in the evaluation of web search for their users. Design/methodology/approach - The paper puts forward an approach which utilizes traditional laboratory-based evaluation measures, such as average precision and precision at N documents, augmented with diagnostic measures, such as broken links, which are used to show why precision measures are depressed as well as to gauge the quality of a search engine's crawling mechanism. Findings - The paper shows how to use diagnostic measures in conjunction with precision in order to evaluate web search. Practical implications - The methodology presented in this paper will be useful to any information professional who regularly uses web search as part of their information seeking and needs to evaluate web search services. Originality/value - The paper argues that the use of diagnostic measures is essential in web search, as precision measures on their own do not allow a searcher to understand why search results differ between search engines.
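    As a generic illustration of the kind of measures the paper combines (a sketch, not the author's actual instrument), precision at N can be paired with a diagnostic such as the broken-link rate:

      def precision_at_n(results, relevant, n=10):
          """Fraction of the top-N results judged relevant."""
          return sum(1 for r in results[:n] if r in relevant) / n

      def broken_link_rate(results, is_broken, n=10):
          """Diagnostic measure: fraction of top-N links that fail to
          resolve -- one reason a precision figure may be depressed."""
          return sum(1 for r in results[:n] if is_broken(r)) / n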
  2. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the update of partitioned inverted files (2007) 0.01
    0.009356454 = product of:
      0.04366345 = sum of:
        0.017435152 = weight(_text_:web in 819) [ClassicSimilarity], result of:
          0.017435152 = score(doc=819,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=819)
        0.0050448296 = weight(_text_:information in 819) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=819,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=819)
        0.021183468 = weight(_text_:retrieval in 819) [ClassicSimilarity], result of:
          0.021183468 = score(doc=819,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 819, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=819)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - An issue that tends to be ignored in information retrieval is that of updating inverted files. This is largely because inverted files were devised to provide fast query service, and much work has been done with the emphasis strongly on queries. This paper aims to study the effect of using parallel methods for the update of inverted files in order to reduce costs, by looking at two types of partitioning for inverted files: document identifier and term identifier. Design/methodology/approach - Raw update service and update with query service are studied with these partitioning schemes using an incremental update strategy. The paper uses standard measures from parallel computing, such as speedup, to examine the computing results, and also the costs of reorganising indexes while servicing transactions. Findings - Empirical results show that for both transaction processing and index reorganisation the document identifier method is superior. However, there is evidence that the term identifier partitioning method could be useful in a concurrent transaction processing context. Practical implications - There is an increasing need to service updates to inverted files, now a requirement for dynamic collections such as the web, demonstrating that the requirements of inverted file maintenance have shifted from those of the past. Originality/value - The paper is of value to database administrators who manage large-scale and dynamic text collections, and who need to use parallel computing to implement their text retrieval services.
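    A toy sketch of the contrast between the two schemes (assumed shapes, not the authors' implementation): under DocId partitioning each document's postings land on a single partition, so a one-document update touches one node; under TermId partitioning a term's whole posting list lives on one partition, so the same update fans out across many nodes:

      from collections import defaultdict

      def partition_postings(postings, n, scheme="docid"):
          """Split an inverted file {term: [doc_ids]} across n partitions."""
          parts = [defaultdict(list) for _ in range(n)]
          for term, doc_ids in postings.items():
              for doc_id in doc_ids:
                  if scheme == "docid":
                      dest = doc_id % n      # a document's postings co-locate
                  else:                      # "termid"
                      dest = hash(term) % n  # a term's postings co-locate
                  parts[dest][term].append(doc_id)
          return parts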
  3. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.006371425 = product of:
      0.044599973 = sum of:
        0.033893548 = weight(_text_:retrieval in 5108) [ClassicSimilarity], result of:
          0.033893548 = score(doc=5108,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37811437 = fieldWeight in 5108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5108)
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.032119278 = score(doc=5108,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    In this paper, methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
    Date
    20. 1.2007 18:30:22
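    A rough sketch of the parallel passage processing idea above (the scoring below is a toy stand-in, not the Okapi passage algorithm itself): fix a passage window, score every window in a document, and distribute documents across worker processes so that more passages can be examined in the same wall-clock time:

      from concurrent.futures import ProcessPoolExecutor

      def best_passage_score(doc_terms, query_terms, window=50, step=25):
          """Score every fixed-length passage in a document, keep the best."""
          best = 0.0
          for start in range(0, max(1, len(doc_terms) - window + 1), step):
              passage = doc_terms[start:start + window]
              overlap = sum(1 for t in passage if t in query_terms)  # toy score
              best = max(best, overlap / window)
          return best

      def parallel_passage_search(docs, query_terms, workers=4):
          # Distribute documents across worker processes.
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return list(pool.map(best_passage_score, docs,
                                   [query_terms] * len(docs)))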
  4. MacFarlane, A.; Tuson, A.: Local search : a guide for the information retrieval practitioner (2009) 0.01
    0.0061772587 = product of:
      0.043240808 = sum of:
        0.012107591 = weight(_text_:information in 2457) [ClassicSimilarity], result of:
          0.012107591 = score(doc=2457,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 2457, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2457)
        0.031133216 = weight(_text_:retrieval in 2457) [ClassicSimilarity], result of:
          0.031133216 = score(doc=2457,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 2457, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2457)
      0.14285715 = coord(2/14)
    
    Abstract
    There are a number of combinatorial optimisation problems in information retrieval in which the use of local search methods is worthwhile. The purpose of this paper is to show how local search can be used to solve some well known tasks in information retrieval (IR), to show how previous research in the field is piecemeal, bereft of structure and methodologically flawed, and to suggest more rigorous ways of applying local search methods to solve IR problems. We provide a query-based taxonomy for analysing the use of local search in IR tasks and an overview of issues such as fitness functions, statistical significance and test collections when conducting experiments on combinatorial optimisation problems. The paper gives a guide to the pitfalls and problems for IR practitioners who wish to use local search to solve their research issues, and gives practical advice on the use of such methods. The query-based taxonomy is a novel structure which can be used by the IR practitioner in order to examine the use of local search in IR.
    Source
    Information processing and management. 45(2009) no.1, S.159-174
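    The shape of a local search method in this setting, as a generic sketch (the fitness function here stands in for an IR effectiveness measure, e.g. mean average precision computed over a test collection):

      def hill_climb(candidate, fitness, neighbour, iters=1000):
          """Generic local search: accept a move only if it improves fitness."""
          best, best_f = candidate, fitness(candidate)
          for _ in range(iters):
              nxt = neighbour(best)      # e.g. perturb one term weight
              f = fitness(nxt)
              if f > best_f:
                  best, best_f = nxt, f
          return best, best_f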
  5. MacFarlane, A.: On open source IR (2003) 0.00
    0.0045768693 = product of:
      0.032038085 = sum of:
        0.008071727 = weight(_text_:information in 2010) [ClassicSimilarity], result of:
          0.008071727 = score(doc=2010,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 2010, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2010)
        0.023966359 = weight(_text_:retrieval in 2010) [ClassicSimilarity], result of:
          0.023966359 = score(doc=2010,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 2010, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2010)
      0.14285715 = coord(2/14)
    
    Abstract
    Open source development is becoming an increasingly popular way of producing software, due to a number of factors. It is argued in this paper that these factors may have a significant impact on the future of information retrieval (IR) systems, and that it is desirable that these systems are made open to all. Some problems that may prevent the uptake of open source IR systems are outlined, and a number of open source IR systems are described.
  6. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the generation of partitioned inverted files (2005) 0.00
    0.0044962796 = product of:
      0.031473957 = sum of:
        0.0060537956 = weight(_text_:information in 651) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=651,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
        0.025420163 = weight(_text_:retrieval in 651) [ClassicSimilarity], result of:
          0.025420163 = score(doc=651,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 651, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partition are investigated: TermId and DocId. Design/methodology/approach - We use standard measures from parallel computing, such as speedup and efficiency, to examine the computing results, and also the space costs of our trial indexing experiments. Findings - The results from runs on both partitioning methods are compared and contrasted, leading to the conclusion that DocId is the more efficient method. Practical implications - The practical implication is that the DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration. Originality/value - The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
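    The parallel-computing measures referred to above, in a small worked sketch (the numbers are illustrative, not the paper's results):

      def speedup(t_serial, t_parallel):
          """S(p) = T(1) / T(p)."""
          return t_serial / t_parallel

      def efficiency(t_serial, t_parallel, p):
          """E(p) = S(p) / p; 1.0 means perfect linear scaling."""
          return speedup(t_serial, t_parallel) / p

      # Indexing that takes 12 h serially and 2 h on 8 processors:
      # speedup(12, 2) -> 6.0, efficiency(12, 2, 8) -> 0.75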
  7. Lu, W.; MacFarlane, A.; Venuti, F.: Okapi-based XML indexing (2009) 0.00
    0.004274482 = product of:
      0.029921371 = sum of:
        0.008737902 = weight(_text_:information in 3629) [ClassicSimilarity], result of:
          0.008737902 = score(doc=3629,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 3629, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3629)
        0.021183468 = weight(_text_:retrieval in 3629) [ClassicSimilarity], result of:
          0.021183468 = score(doc=3629,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 3629, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3629)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - Being an important data exchange and information storage standard, XML has generated a great deal of interest, and particular attention has been paid to the issue of XML indexing. Clear use cases for structured search in XML have been established. However, most of the research in the area is based either on relational database systems or on specialized semi-structured data management systems. This paper aims to propose a method for XML indexing based on the information retrieval (IR) system Okapi. Design/methodology/approach - First, the paper reviews the structure of inverted files and gives an overview of why this indexing mechanism cannot properly support XML retrieval, using the underlying data structures of Okapi as an example. Then the paper explores a revised method implemented on Okapi using path indexing structures. The paper evaluates these index structures through the metrics of indexing run time, path search run time and space costs, using the INEX and Reuters RCV1 collections. Findings - Initial results on the INEX collections show that there is a substantial overhead in space costs for the method, but this increase does not affect run time adversely. Indexing results on Reuters RCV1 sub-collections of differing sizes show that the increase in space costs with collection size is significant, but in terms of run time the increase is linear. Path search results show sub-millisecond run times, demonstrating minimal overhead for XML search. Practical implications - Overall, the results show that the method implemented to support XML search in a traditional IR system such as Okapi is viable. Originality/value - The paper provides useful information on a method for XML indexing based on the IR system Okapi.
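    A hedged sketch of the rough shape of a path index (illustrative only; the paper's revised structures are built on Okapi's actual inverted-file layout):

      from collections import defaultdict

      def build_path_index(elements):
          """Map each element path and its prefixes to the ids that contain it."""
          path_index = defaultdict(set)
          for doc_id, path in elements:
              parts = path.split("/")
              for i in range(1, len(parts) + 1):   # index every prefix too
                  path_index["/".join(parts[:i])].add(doc_id)
          return path_index

      # build_path_index([(1, "article/sec/p"), (2, "article/abs")])
      # maps "article" -> {1, 2}, "article/sec" -> {1}, "article/sec/p" -> {1}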
  8. Inskip, C.; MacFarlane, A.; Rafferty, P.: Meaning, communication, music : towards a revised communication model (2008) 0.00
    0.0040454194 = product of:
      0.028317936 = sum of:
        0.0071344664 = weight(_text_:information in 2347) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=2347,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 2347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2347)
        0.021183468 = weight(_text_:retrieval in 2347) [ClassicSimilarity], result of:
          0.021183468 = score(doc=2347,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 2347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2347)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - If an information retrieval system is going to be of value to the user then it must give meaning to the information which matches the meaning given to it by the user. The meaning given to music varies according to who is interpreting it - the author/composer, the performer, the cataloguer or the listener - and this affects how music is organized and retrieved. This paper aims to examine the meaning of music and how that meaning is communicated, and suggests how this may affect music retrieval. Design/methodology/approach - Musicology is used to define music and examine its functions, leading to a discussion of how music has been organised and described. Various ways of establishing the meaning of music are reviewed, focussing on established musical analysis techniques. It is suggested that traditional methods are of limited use with digitised popular music. A discussion of semiotics and a review of semiotic analysis in western art music lead to a discussion of the semiotics of popular music and an examination of the ideas of Middleton, Stefani and Tagg. Findings - On the premise that music exists when communication takes place, a discussion of selected communication models leads to the proposal of a revised version of Tagg's model, adjusted to include listener feedback. Originality/value - The outcome of the analysis is a revised version of Tagg's communication model, adapted to reflect user feedback. It is suggested that this revised communication model reflects the way in which meaning is given to music.
  9. Inskip, C.; Butterworth, R.; MacFarlane, A.: A study of the information needs of the users of a folk music library and the implications for the design of a digital library system (2008) 0.00
    9.5338316E-4 = product of:
      0.013347364 = sum of:
        0.013347364 = weight(_text_:information in 2053) [ClassicSimilarity], result of:
          0.013347364 = score(doc=2053,freq=14.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.256578 = fieldWeight in 2053, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2053)
      0.071428575 = coord(1/14)
    
    Abstract
    A qualitative study of user information needs is reported, based on a purposive sample of users and potential users of the Vaughan Williams Memorial Library, a small specialist folk music library in North London. The study set out to establish what the information needs of the users (both existing and potential) are, so that they can be taken into account in the design of the library's online service. The information needs framework proposed by Nicholas [Nicholas, D. (2000) Assessing information needs: tools, techniques and concepts for the internet age. London: ASLIB] is used as an analytical tool to achieve this end. The demographics of the users were examined in order to establish four user groups: Performer, Academic, Professional and Enthusiast. Important information needs were found to be based on social interaction, and key resources of the library were its staff, the concentration of the collection and the library's social nature. A collection of broad design requirements is proposed based on the analysis, and the study also provides some insights into the issue of musical relevance, which are discussed.
    Source
    Information processing and management. 44(2008) no.2, S.647-662