Search (5 results, page 1 of 1)

  • author_ss:"Mizzaro, S."
  1. Mizzaro, S.: How many relevances in information retrieval? (1998) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 3799) [ClassicSimilarity], result of:
              0.054452434 = score(doc=3799,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 3799, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3799)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
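    The explain tree above can be reproduced arithmetically: in Lucene's ClassicSimilarity, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm (with tf = sqrt(termFreq)), the term score is their product, and each coord(1/2) halves the result. A minimal sketch, using only the constants printed in the tree for doc 3799:

    ```python
    import math

    # Constants copied from the explain output for result 1 (doc 3799,
    # term _text_:systems); nothing below is invented.
    query_norm = 0.052184064
    idf = 3.0731742            # idf(docFreq=5561, maxDocs=44218)
    tf = math.sqrt(2.0)        # tf(freq=2.0) = sqrt(freq) = 1.4142135
    field_norm = 0.078125      # fieldNorm(doc=3799)

    query_weight = idf * query_norm          # queryWeight  = 0.16037072
    field_weight = tf * idf * field_norm     # fieldWeight  = 0.339541
    raw_score = query_weight * field_weight  # weight       = 0.054452434

    # Two nested coord(1/2) factors (1 of 2 query clauses matched):
    final_score = raw_score * 0.5 * 0.5      # 0.013613109
    ```

    The same arithmetic with each hit's own freq and fieldNorm reproduces the remaining four score trees; only tf and fieldNorm change, since idf and queryNorm are query-wide constants.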
    
    Abstract
    Classifies various kinds of relevance in information retrieval in a formally defined four-dimensional space. Such a classification aids understanding of the nature of relevance and relevance judgement. Analyzes the consequences of this classification for the design and evaluation of information retrieval systems.
  2. Della Mea, V.; Mizzaro, S.: Measuring retrieval effectiveness : a new proposal and a first experimental validation (2004) 0.01
    0.013476291 = product of:
      0.026952581 = sum of:
        0.026952581 = product of:
          0.053905163 = sum of:
            0.053905163 = weight(_text_:systems in 2263) [ClassicSimilarity], result of:
              0.053905163 = score(doc=2263,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.33612844 = fieldWeight in 2263, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2263)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text Retrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems with precision and recall are also highlighted and discussed.
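    The abstract describes ADM as replacing binary relevance with graded relevance values. A minimal sketch of that idea, assuming ADM is computed as one minus the average absolute distance between the system's estimated relevance and the user's judged relevance, both on a [0, 1] scale (the paper's exact formulation may differ; the function name and example values are illustrative only):

    ```python
    # Hypothetical sketch of an average-distance effectiveness measure in
    # the spirit of ADM: non-binary relevance on both sides, effectiveness
    # as closeness of system estimates to user judgments.
    def adm(system_estimates, user_judgments):
        """1 - mean(|system estimate - user judgment|), both in [0, 1]."""
        assert len(system_estimates) == len(user_judgments)
        total = sum(abs(s - u) for s, u in zip(system_estimates, user_judgments))
        return 1.0 - total / len(system_estimates)

    # A system whose estimates match the judgments exactly scores 1.0:
    perfect = adm([0.9, 0.2, 0.5], [0.9, 0.2, 0.5])

    # A system that inverts the relevance ordering scores 0.0:
    poor = adm([1.0, 0.0], [0.0, 1.0])
    ```

    Unlike precision and recall, such a measure needs no retrieved/not-retrieved cutoff, which is the conceptual point the abstract raises against the binary assumptions.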
  3. Della Mea, V.; Demartini, G.; Di Gaspero, L.; Mizzaro, S.: Measuring retrieval effectiveness with Average Distance Measure (ADM) (2006) 0.01
    0.013476291 = product of:
      0.026952581 = sum of:
        0.026952581 = product of:
          0.053905163 = sum of:
            0.053905163 = weight(_text_:systems in 774) [ClassicSimilarity], result of:
              0.053905163 = score(doc=774,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.33612844 = fieldWeight in 774, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=774)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this paper, we describe an information retrieval effectiveness measure named ADM (Average Distance Measure) that questions these assumptions. We compare ADM with other measures, discuss it from a conceptual point of view, and report some experimental results. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems.
  4. Crestani, F.; Mizzaro, S.; Scagnetto, I.: Mobile information retrieval (2017) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 4469) [ClassicSimilarity], result of:
              0.038503684 = score(doc=4469,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 4469, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4469)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    LCSH
    User interfaces (Computer systems)
    Subject
    User interfaces (Computer systems)
  5. Carpineto, C.; Mizzaro, S.; Romano, G.; Snidero, M.: Mobile information retrieval with search results clustering : prototypes and evaluations (2009) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 2793) [ClassicSimilarity], result of:
              0.027226217 = score(doc=2793,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 2793, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2793)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Web searches from mobile devices such as PDAs and cell phones are becoming increasingly popular. However, the traditional list-based search interface paradigm does not scale well to mobile devices due to their inherent limitations. In this article, we investigate the application of search results clustering, used with some success for desktop computer searches, to the mobile scenario. Building on CREDO (Conceptual Reorganization of Documents), a Web clustering engine based on concept lattices, we present its mobile versions Credino and SmartCREDO, for PDAs and cell phones, respectively. Next, we evaluate the retrieval performance of the three prototype systems. We measure the effectiveness of their clustered results compared to a ranked list of results on a subtopic retrieval task, by means of the device-independent notion of subtopic reach time together with a reusable test collection built from Wikipedia ambiguous entries. Then, we make a cross-comparison of methods (i.e., clustering and ranked list) and devices (i.e., desktop, PDA, and cell phone), using an interactive information-finding task performed by external participants. The main finding is that clustering engines are a viable complementary approach to plain search engines both for desktop and mobile searches especially, but not only, for multitopic informational queries.