Search (5 results, page 1 of 1)

  • author_ss:"Mizzaro, S."
  • language_ss:"e"
  • year_i:[2000 TO 2010}
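
  The three active filters above use Solr/Lucene query syntax; note the half-open range year_i:[2000 TO 2010}, which includes 2000 but excludes 2010. A minimal sketch of how the same filters could be sent to a Solr backend (the endpoint URL and core name are hypothetical assumptions, not taken from this page):

    import requests

    SOLR_URL = "http://localhost:8983/solr/catalog/select"  # hypothetical endpoint

    params = {
        "q": "*:*",
        "fq": [                           # one fq per active filter, copied verbatim
            'author_ss:"Mizzaro, S."',
            'language_ss:"e"',
            "year_i:[2000 TO 2010}",      # inclusive start, exclusive end
        ],
        "wt": "json",
    }

    resp = requests.get(SOLR_URL, params=params)
    print(resp.json()["response"]["numFound"])  # 5 on this index
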
  1. Della Mea, V.; Mizzaro, S.: Measuring retrieval effectiveness : a new proposal and a first experimental validation (2004) 0.01
    0.008970889 = product of:
      0.026912667 = sum of:
        0.026912667 = product of:
          0.080738 = sum of:
            0.080738 = weight(_text_:retrieval in 2263) [ClassicSimilarity], result of:
              0.080738 = score(doc=2263,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5231199 = fieldWeight in 2263, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2263)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text Retrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems with precision and recall are also highlighted and discussed.
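
    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch reproducing its arithmetic, assuming the standard Lucene formulas; all inputs are quoted from the explain tree:

      import math

      freq       = 10.0         # termFreq of "retrieval" in doc 2263
      doc_freq   = 5836         # docFreq from the explain tree
      max_docs   = 44218        # maxDocs from the explain tree
      query_norm = 0.051022716
      field_norm = 0.0546875

      tf  = math.sqrt(freq)                            # 3.1622777
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.024915

      query_weight = idf * query_norm                  # 0.15433937 (queryWeight)
      field_weight = tf * idf * field_norm             # 0.5231199  (fieldWeight)
      raw_score    = query_weight * field_weight       # 0.080738

      # Two coord(1/3) factors: one of three query clauses matched.
      score = raw_score / 9.0
      print(f"{score:.9f}")                            # ~0.008970889
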
  2. Della Mea, V.; Demartini, G.; Di Gaspero, L.; Mizzaro, S.: Measuring retrieval effectiveness with Average Distance Measure (ADM) (2006) 0.01
    0.008970889 = product of:
      0.026912667 = sum of:
        0.026912667 = product of:
          0.080738 = sum of:
            0.080738 = weight(_text_:retrieval in 774) [ClassicSimilarity], result of:
              0.080738 = score(doc=774,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5231199 = fieldWeight in 774, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=774)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this paper, we describe an information retrieval effectiveness measure named ADM (Average Distance Measure) that questions these assumptions. We compare ADM with other measures, discuss it from a conceptual point of view, and report some experimental results. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems.
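
    Both ADM abstracts define the measure only informally. A minimal sketch of its usual formulation, one minus the mean absolute distance between user-assigned and system-estimated relevance (our reading of the abstracts, not the papers' exact notation):

      def adm(user_rel, sys_rel):
          """Average Distance Measure over parallel relevance lists in [0, 1];
          returns 1.0 for perfect agreement between user and system."""
          assert len(user_rel) == len(sys_rel) and user_rel
          distance = sum(abs(u - s) for u, s in zip(user_rel, sys_rel))
          return 1.0 - distance / len(user_rel)

      # A system that slightly mis-estimates three documents' relevance.
      print(adm([1.0, 0.6, 0.0], [0.8, 0.7, 0.1]))  # 0.8666...
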
  3. Brajnik, G.; Mizzaro, S.; Tasso, C.; Venuti, F.: Strategic help in user interfaces for information retrieval (2002) 0.00
    0.0049634436 = product of:
      0.014890331 = sum of:
        0.014890331 = product of:
          0.04467099 = sum of:
            0.04467099 = weight(_text_:retrieval in 5203) [ClassicSimilarity], result of:
              0.04467099 = score(doc=5203,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.28943354 = fieldWeight in 5203, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5203)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Brajnik et al. describe their view of an effective retrieval interface: one that coaches the searcher using stored knowledge not only of database structure, but of strategic situations likely to occur, such as repeating failed tactics in a low-return search or failing to try relevance feedback techniques. The emphasis is on the system suggesting search strategy improvements by relating them to an analysis of the work entered so far and selecting and ranking those found relevant. FIRE is an interface utilizing these techniques. It allows the user to assign documents to useful, topical, and trash folders; maintains thesauri files automatically searchable on query terms; and builds, using user entries and a rule system, a picture of the retrieval situation from which it generates suggestions. Six participants used FIRE in INSPEC20K database searches, two for their own information needs and four for needs provided by the authors. Satisfaction was measured in a structured post-search interview, behavior by log analysis, and performance by recall and precision in the canned searches. Participants found the suggestions helpful, but insisted they would have taken those approaches without such assistance. Users took the suggestions offered and preferred those demanding the least effort.
  4. Carpineto, C.; Mizzaro, S.; Romano, G.; Snidero, M.: Mobile information retrieval with search results clustering : prototypes and evaluations (2009) 0.00
    0.0049634436 = product of:
      0.014890331 = sum of:
        0.014890331 = product of:
          0.04467099 = sum of:
            0.04467099 = weight(_text_:retrieval in 2793) [ClassicSimilarity], result of:
              0.04467099 = score(doc=2793,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.28943354 = fieldWeight in 2793, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2793)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Web searches from mobile devices such as PDAs and cell phones are becoming increasingly popular. However, the traditional list-based search interface paradigm does not scale well to mobile devices due to their inherent limitations. In this article, we investigate the application of search results clustering, used with some success for desktop computer searches, to the mobile scenario. Building on CREDO (Conceptual Reorganization of Documents), a Web clustering engine based on concept lattices, we present its mobile versions Credino and SmartCREDO, for PDAs and cell phones, respectively. Next, we evaluate the retrieval performance of the three prototype systems. We measure the effectiveness of their clustered results compared to a ranked list of results on a subtopic retrieval task, by means of the device-independent notion of subtopic reach time together with a reusable test collection built from Wikipedia ambiguous entries. Then, we make a cross-comparison of methods (i.e., clustering and ranked list) and devices (i.e., desktop, PDA, and cell phone), using an interactive information-finding task performed by external participants. The main finding is that clustering engines are a viable complementary approach to plain search engines, both for desktop and mobile searches, especially (but not only) for multitopic informational queries.
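
    The abstract names subtopic reach time without defining it; one plausible, simplified reading (our illustration, not necessarily the paper's exact definition) is the number of entries a user must scan before the first result for the target subtopic, counting cluster labels when results are clustered:

      def reach_flat(results, subtopic):
          """Entries scanned in a plain ranked list before the first hit."""
          for rank, topics in enumerate(results, start=1):
              if subtopic in topics:
                  return rank
          return None  # subtopic unreachable

      def reach_clustered(clusters, subtopic):
          """Cluster labels scanned, plus entries inside the matching cluster."""
          for i, (label, members) in enumerate(clusters, start=1):
              if label == subtopic:
                  hit = reach_flat(members, subtopic)
                  return None if hit is None else i + hit
          return None

      # "jaguar"-style ambiguous query: the flat list buries the animal
      # sense at rank 4; clustering reaches it in 2 labels + 1 entry.
      flat = [{"car"}, {"car"}, {"os"}, {"animal"}]
      clusters = [("car", [{"car"}, {"car"}]), ("animal", [{"animal"}])]
      print(reach_flat(flat, "animal"), reach_clustered(clusters, "animal"))  # 4 3
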
  5. Mizzaro, S.: Quality control in scholarly publishing : a new proposal (2003) 0.00
    0.0034615172 = product of:
      0.010384551 = sum of:
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 1810) [ClassicSimilarity], result of:
              0.031153653 = score(doc=1810,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 1810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1810)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Internet has fostered a faster, more interactive, and more effective model of scholarly publishing. However, as the quantity of information available is constantly increasing, its quality is threatened, since the traditional quality control mechanism of peer review is often not used (e.g., in online repositories of preprints, and by people publishing whatever they want on their Web pages). This paper describes a new kind of electronic scholarly journal, in which the standard submission-review-publication process is replaced by a more sophisticated approach based on judgments expressed by the readers: in this way, each reader is, potentially, a peer reviewer. New ingredients, not found in similar approaches, are that each reader's judgment is weighted on the basis of the reader's skills as a reviewer, and that readers are encouraged to express correct judgments by a feedback mechanism that estimates their own quality. The new electronic scholarly journal is described in both intuitive and formal ways. Its effectiveness is tested by several laboratory experiments that simulate what might happen if the system were deployed and used.
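
    The journal model above weights each reader's judgment by the system's estimate of that reader's quality as a reviewer. A toy illustration using a plain weighted mean (the paper's actual aggregation and feedback loop are more elaborate, and all names here are ours):

      def paper_quality(judgments):
          """judgments: (score, reader_weight) pairs, both in [0, 1];
          score is a reader's judgment of the paper, reader_weight the
          system's current estimate of that reader's reviewing skill."""
          total = sum(w for _, w in judgments)
          if total == 0:
              return None  # no weighted judgments yet
          return sum(s * w for s, w in judgments) / total

      # A skilled reviewer's high score outweighs two weaker voices.
      print(paper_quality([(0.9, 1.0), (0.6, 0.5), (0.2, 0.1)]))  # 0.7625
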