Search (5 results, page 1 of 1)

  • author_ss:"Mostafa, J."
  • year_i:[2000 TO 2010}
  1. Quiroga, L.M.; Mostafa, J.: An experiment in building profiles in information filtering : the role of context of user relevance feedback (2002) 0.04
    0.035790663 = product of:
      0.053685993 = sum of:
        0.037639882 = weight(_text_:resources in 2579) [ClassicSimilarity], result of:
          0.037639882 = score(doc=2579,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 2579, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2579)
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 2579) [ClassicSimilarity], result of:
              0.032092217 = score(doc=2579,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 2579, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2579)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
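    Note on the score breakdown: the tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch of how those numbers combine, using only values copied from the tree itself (maxDocs, queryNorm, docFreq, termFreq, fieldNorm), the following Python reproduces the 0.035790663 score of this record; the same arithmetic applies to the explain trees of the other results.

      import math

      # All constants below are read off the explain tree above; this is only a
      # sketch of how ClassicSimilarity combines them, not part of the result itself.
      MAX_DOCS = 44218
      QUERY_NORM = 0.051133685

      def term_weight(freq, doc_freq, field_norm):
          """queryWeight * fieldWeight for one term in one field."""
          tf = math.sqrt(freq)                           # 1.4142135 for freq=2.0
          idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1.0))
          query_weight = idf * QUERY_NORM                # idf * queryNorm
          field_weight = tf * idf * field_norm           # tf * idf * fieldNorm
          return query_weight * field_weight

      resources  = term_weight(2.0, 3122, 0.0390625)     # ~0.037639882
      management = term_weight(2.0, 4130, 0.0390625)     # ~0.032092217

      # "management" is scaled by coord(1/2) inside its own clause, and the outer
      # sum is scaled by coord(2/3) because 2 of the 3 query clauses matched.
      score = (resources + management * 0.5) * (2.0 / 3.0)
      print(score)                                       # ~0.035790663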
    
    Abstract
    An experiment was conducted to see how relevance feedback could be used to build and adjust profiles to improve the performance of filtering systems. Data was collected during the system interaction of 18 graduate students with SIFTER (Smart Information Filtering Technology for Electronic Resources), a filtering system that ranks incoming information based on users' profiles. The data set came from a collection of 6000 records concerning consumer health. In the first phase of the study, three different modes of profile acquisition were compared. The explicit mode allowed users to directly specify the profile; the implicit mode utilized relevance feedback to create and refine the profile; and the combined mode allowed users to initialize the profile and to continuously refine it using relevance feedback. Filtering performance, measured in terms of Normalized Precision, showed that the three approaches were significantly different (α = 0.05 and p = 0.012). The explicit mode of profile acquisition consistently produced superior results. Exclusive reliance on relevance feedback in the implicit mode resulted in inferior performance. The low performance obtained by the implicit acquisition mode motivated the second phase of the study, which aimed to clarify the role of context in relevance feedback judgments. An inductive content analysis of thinking aloud protocols showed dimensions that were highly situational, establishing the importance context plays in feedback relevance assessments. Results suggest the need for better representation of documents, profiles, and relevance feedback mechanisms that incorporate dimensions identified in this research.
    Source
    Information processing and management. 38(2002) no.5, S.671-694
  2. Mostafa, J.: Document search interface design : background and introduction to special topic section (2004) 0.03
    0.026077677 = product of:
      0.078233026 = sum of:
        0.078233026 = weight(_text_:resources in 2503) [ClassicSimilarity], result of:
          0.078233026 = score(doc=2503,freq=6.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.4191312 = fieldWeight in 2503, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=2503)
      0.33333334 = coord(1/3)
    
    Abstract
    A library user searching for high-quality and authoritative information today is confronted with thousands of resources that cover a wide variety of topics. The heterogeneity factor alone can be a major obstacle for the user to select appropriate resources to search. Depending on the information need, the user may have to navigate among resources that are in different formats (bibliographic versus full-text), are stored in different media (text versus images), have different levels of coverage (news versus scholarly reports), or are published in different languages. Beyond the heterogeneity factor, the user faces specific challenges related to the search experience itself. These factors and their impact on searching can be best described using a four-phase framework, namely: formulation, action, presentation, and refinement (Shneiderman, Byrd, & Croft, 1998). Certain key functions for document search interfaces are described below in the context of these four phases. Following the description, highlights from the contributed papers are discussed.
  3. Mukhopadhyay, S.; Peng, S.; Raje, R.; Mostafa, J.; Palakal, M.: Distributed multi-agent information filtering : a comparative study (2005) 0.01
    0.0064184438 = product of:
      0.01925533 = sum of:
        0.01925533 = product of:
          0.03851066 = sum of:
            0.03851066 = weight(_text_:management in 3559) [ClassicSimilarity], result of:
              0.03851066 = score(doc=3559,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.22344214 = fieldWeight in 3559, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3559)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Information filtering is a technique to identify, in large collections, information that is relevant according to some criteria (e.g., a user's personal interests, or a research project objective). As such, it is a key technology for providing efficient user services in any large-scale information infrastructure, e.g., digital libraries. To provide large-scale information filtering services, both computational and knowledge management issues need to be addressed. A centralized (single-agent) approach to information filtering suffers from serious drawbacks in terms of speed, accuracy, and economic considerations, and becomes unrealistic even for medium-scale applications. In this article, we discuss two distributed (multi-agent) information filtering approaches, which are distributed with respect to knowledge or functionality, to overcome the limitations of single-agent centralized information filtering. Large-scale experimental studies involving the well-known TREC data set are also presented to illustrate the advantages of distributed filtering as well as to compare the different distributed approaches.
  4. Seki, K.; Mostafa, J.: Gene ontology annotation as text categorization : an empirical study (2008) 0.01
    0.005348703 = product of:
      0.016046109 = sum of:
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 2123) [ClassicSimilarity], result of:
              0.032092217 = score(doc=2123,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 2123, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2123)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 44(2008) no.5, S.1754-1770
  5. Mostafa, J.: Bessere Suchmaschinen für das Web [Better search engines for the Web] (2006) 0.00
    0.0023093028 = product of:
      0.0069279084 = sum of:
        0.0069279084 = product of:
          0.013855817 = sum of:
            0.013855817 = weight(_text_:22 in 4871) [ClassicSimilarity], result of:
              0.013855817 = score(doc=4871,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.07738023 = fieldWeight in 4871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4871)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2006 18:34:49