Search (10 results, page 1 of 1)

  • author_ss:"Mizzaro, S."
  1. Della Mea, V.; Mizzaro, S.: Measuring retrieval effectiveness : a new proposal and a first experimental validation (2004) 0.00
    0.00334869 = product of:
      0.00669738 = sum of:
        0.00669738 = product of:
          0.01339476 = sum of:
            0.01339476 = weight(_text_:a in 2263) [ClassicSimilarity], result of:
              0.01339476 = score(doc=2263,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.25222903 = fieldWeight in 2263, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2263)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
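
    Note on the "product of / sum of" trees: they are Lucene ClassicSimilarity (TF-IDF) explain output, printed for every hit on this page. Reconstructing the first entry's tree as one calculation, using Lucene's standard definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) together with the factors printed above:

    \[
    \begin{aligned}
    \mathit{tf} &= \sqrt{\mathit{freq}} = \sqrt{16} = 4.0, \qquad
    \mathit{idf} = 1 + \ln\tfrac{44218}{37942+1} \approx 1.153047 \\
    \mathit{queryWeight} &= \mathit{idf}\cdot\mathit{queryNorm} = 1.153047 \cdot 0.046056706 \approx 0.053105544 \\
    \mathit{fieldWeight} &= \mathit{tf}\cdot\mathit{idf}\cdot\mathit{fieldNorm} = 4.0 \cdot 1.153047 \cdot 0.0546875 \approx 0.25222903 \\
    \mathit{score} &= \mathrm{coord}^{2}\cdot\mathit{queryWeight}\cdot\mathit{fieldWeight} = 0.5^{2} \cdot 0.053105544 \cdot 0.25222903 \approx 0.00334869
    \end{aligned}
    \]

    The 0.00 shown next to each title is this score rounded to two decimals.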
    
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text Retrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems with precision and recall are also highlighted and discussed.
    Type
    a
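
    The ADM measure that entries 1 and 4 describe admits a compact sketch. Below is a minimal Python version, assuming ADM's commonly cited form (one minus the mean absolute distance between user-assessed and system-estimated relevance, both on a [0, 1] scale); the names and dict-based interface are illustrative, not from the paper:

      def adm(user_rel, sys_rel):
          """Average Distance Measure sketch: 1 - mean |user - system| relevance gap.

          Both arguments map document ids to relevance values in [0, 1].
          1.0 means the system's estimates match the user's exactly;
          0.0 means maximal disagreement on every document.
          """
          docs = user_rel.keys() | sys_rel.keys()
          if not docs:
              raise ValueError("no documents to evaluate")
          gap = sum(abs(user_rel.get(d, 0.0) - sys_rel.get(d, 0.0)) for d in docs)
          return 1.0 - gap / len(docs)

      # Example: a system that mis-scores both documents by 0.2.
      print(adm({"d1": 1.0, "d2": 0.2}, {"d1": 0.8, "d2": 0.4}))  # 0.8
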
  2. Mizzaro, S.: Quality control in scholarly publishing : a new proposal (2003) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 1810) [ClassicSimilarity], result of:
              0.011481222 = score(doc=1810,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 1810, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1810)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Internet has fostered a faster, more interactive and effective model of scholarly publishing. However, as the quantity of information available is constantly increasing, its quality is threatened, since the traditional quality control mechanism of peer review is often not used (e.g., in online repositories of preprints, and by people publishing whatever they want on their Web pages). This paper describes a new kind of electronic scholarly journal, in which the standard submission-review-publication process is replaced by a more sophisticated approach, based on judgments expressed by the readers: in this way, each reader is, potentially, a peer reviewer. New ingredients, not found in similar approaches, are that each reader's judgment is weighted on the basis of the reader's skills as a reviewer, and that readers are encouraged to express correct judgments by a feedback mechanism that estimates their own quality. The new electronic scholarly journal is described in both intuitive and formal ways. Its effectiveness is tested by several laboratory experiments that simulate what might happen if the system were deployed and used.
    Type
    a
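
    The reader-as-reviewer mechanism in the abstract above (skill-weighted judgments plus a feedback loop on reviewer quality) can be sketched in a few lines. Everything below is a toy model with assumed names and an assumed update rule; the paper's actual formalization differs:

      def paper_score(judgments, skill):
          """Skill-weighted mean of reader judgments for one paper.

          judgments: {reader_id: judgment in [0, 1]}
          skill:     {reader_id: current reviewer-quality weight in (0, 1]}
          """
          total = sum(skill[r] for r in judgments)
          return sum(skill[r] * j for r, j in judgments.items()) / total

      def update_skill(skill, reader, judgment, consensus, rate=0.1):
          """Feedback step: nudge a reader's weight toward their agreement
          with the consensus score, so historically accurate readers
          gradually count for more."""
          agreement = 1.0 - abs(judgment - consensus)
          skill[reader] = (1.0 - rate) * skill[reader] + rate * agreement
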
  3. Mizzaro, S.: Relevance: the whole history (1997) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 1003) [ClassicSimilarity], result of:
              0.0108246 = score(doc=1003,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 1003, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1003)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents the history of relevance through an exhaustive review of the literature. Defines a framework for establishing a common ground, and illustrates the history via the presentation, in chronological order, of the papers on relevance. The history is divided into 3 periods (before 1958, 1959-1976, and 1977-present) and, within each period, papers on relevance are analyzed under 7 different aspects
    Footnote
    Contribution to part 2 of a 2 part series on the history of documentation and information science
    Type
    a
  4. Della Mea, V.; Demartini, G.; Di Gaspero, L.; Mizzaro, S.: Measuring retrieval effectiveness with Average Distance Measure (ADM) (2006) 0.00
    0.0026473717 = product of:
      0.0052947435 = sum of:
        0.0052947435 = product of:
          0.010589487 = sum of:
            0.010589487 = weight(_text_:a in 774) [ClassicSimilarity], result of:
              0.010589487 = score(doc=774,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19940455 = fieldWeight in 774, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=774)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this paper, we describe an information retrieval effectiveness measure named ADM (Average Distance Measure) that questions these assumptions. We compare ADM with other measures, discuss it from a conceptual point of view, and report some experimental results. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems.
    Type
    a
  5. Mizzaro, S.: How many relevances in information retrieval? (1998) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 3799) [ClassicSimilarity], result of:
              0.009567685 = score(doc=3799,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 3799, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3799)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Classifies various kinds of relevance in information retrieval in a formally defined 4-dimensional space. Such a classification aids understanding of the nature of relevance and relevance judgement. Analyzes the consequences of this classification for the design and evaluation of information retrieval systems
    Type
    a
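
    Entry 5's four-dimensional classification lends itself to a data-structure sketch. The dimension values below follow the framework as commonly summarized (information resources; representation of the user's problem; time; components); treat them as an assumption to be checked against the paper:

      from dataclasses import dataclass
      from enum import Enum

      class Resource(Enum):
          DOCUMENT = "document"
          SURROGATE = "surrogate"
          INFORMATION = "information"

      class Representation(Enum):
          REAL_INFORMATION_NEED = "RIN"
          PERCEIVED_INFORMATION_NEED = "PIN"
          REQUEST = "request"
          QUERY = "query"

      @dataclass(frozen=True)
      class RelevancePoint:
          """One kind of relevance = one point in the 4-dimensional space."""
          resource: Resource
          representation: Representation
          components: frozenset   # subset of {"topic", "task", "context"}
          time: str               # stage of the information seeking process

      # Example: classic "system relevance" relates a surrogate to a query
      # on topic alone, at retrieval time.
      system_relevance = RelevancePoint(
          Resource.SURROGATE, Representation.QUERY,
          frozenset({"topic"}), "at retrieval time",
      )
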
  6. Carpineto, C.; Mizzaro, S.; Romano, G.; Snidero, M.: Mobile information retrieval with search results clustering : prototypes and evaluations (2009) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 2793) [ClassicSimilarity], result of:
              0.00894975 = score(doc=2793,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 2793, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2793)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Web searches from mobile devices such as PDAs and cell phones are becoming increasingly popular. However, the traditional list-based search interface paradigm does not scale well to mobile devices due to their inherent limitations. In this article, we investigate the application of search results clustering, used with some success for desktop computer searches, to the mobile scenario. Building on CREDO (Conceptual Reorganization of Documents), a Web clustering engine based on concept lattices, we present its mobile versions Credino and SmartCREDO, for PDAs and cell phones, respectively. Next, we evaluate the retrieval performance of the three prototype systems. We measure the effectiveness of their clustered results compared to a ranked list of results on a subtopic retrieval task, by means of the device-independent notion of subtopic reach time together with a reusable test collection built from Wikipedia ambiguous entries. Then, we make a cross-comparison of methods (i.e., clustering and ranked list) and devices (i.e., desktop, PDA, and cell phone), using an interactive information-finding task performed by external participants. The main finding is that clustering engines are a viable complementary approach to plain search engines for both desktop and mobile searches, especially, but not only, for multitopic informational queries.
    Type
    a
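
    The subtopic reach time metric used in entry 6 can be sketched for the two interfaces being compared. The counting scheme below (results scanned for a plain list; cluster labels plus in-cluster items for a clustering engine, assuming the user opens only the first cluster that actually contains a relevant document) is an illustrative simplification of the paper's device-independent notion:

      def reach_time_list(results, relevant):
          """Ranked list: results scanned until the first one relevant
          to the target subtopic (None if never reached)."""
          for pos, doc in enumerate(results, start=1):
              if doc in relevant:
                  return pos
          return None

      def reach_time_clusters(clusters, relevant):
          """Clustered results: labels scanned down to the first cluster
          holding a relevant document, plus items scanned inside it."""
          for label_pos, cluster in enumerate(clusters, start=1):
              for item_pos, doc in enumerate(cluster, start=1):
                  if doc in relevant:
                      return label_pos + item_pos
          return None
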
  7. Mizzaro, S.: Readersourcing - a manifesto (2012) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 356) [ClassicSimilarity], result of:
              0.008202582 = score(doc=356,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 356, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=356)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This position paper analyzes the current situation in scholarly publishing and peer review practices and presents three theses: (a) we are going to run out of peer reviewers; (b) it is possible to replace referees with readers, an approach that I have named "Readersourcing"; and (c) it is possible to avoid potential weaknesses in the Readersourcing model by adopting an appropriate quality control mechanism. The readersourcing.org system is then presented as an independent, third-party, nonprofit, and academic/scientific endeavor aimed at quality rating of scholarly literature and scholars, and some possible criticisms are discussed.
    Type
    a
  8. Brajnik, G.; Mizzaro, S.; Tasso, C.; Venuti, F.: Strategic help in user interfaces for information retrieval (2002) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 5203) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=5203,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 5203, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Brajnik et al. describe their view of an effective retrieval interface: one that coaches the searcher using stored knowledge not only of database structure, but of strategic situations likely to occur, such as repeating failed tactics in a low-return search, or failing to try relevance feedback techniques. The emphasis is on the system suggesting search strategy improvements by relating them to an analysis of the work entered so far and selecting and ranking those found relevant. FIRE is an interface utilizing these techniques. It allows the user to assign documents to useful, topical, and trash folders; maintains thesauri files automatically searchable on query terms; and, using user entries and a rule system, builds a picture of the retrieval situation from which it generates suggestions. Six participants used FIRE in INSPEC20K database searches, two for their own information needs and four for needs provided by the authors. Satisfaction was measured in a structured post-search interview, behavior by log analysis, and performance by recall and precision in the canned searches. Participants found the suggestions helpful, but insisted they would have taken those approaches without such assistance. Users took the suggestions offered and preferred those demanding the least effort.
    Type
    a
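
    The strategic-help idea in entry 8 (rules that watch the session and suggest tactic changes) reduces to a small rule engine. The two rules below are invented for illustration and are not FIRE's actual rule base:

      def strategic_suggestions(session):
          """session: list of {"query": str, "hits": int, "used_feedback": bool}.
          Returns textual suggestions triggered by simple session patterns."""
          tips = []
          failed = [s for s in session if s["hits"] == 0]
          # Rule 1: the user keeps re-running a failing query.
          if len(failed) >= 2 and len({s["query"] for s in failed}) < len(failed):
              tips.append("You are repeating a query that found nothing; "
                          "try broader or alternative terms.")
          # Rule 2: there are useful results but relevance feedback is unused.
          if any(s["hits"] > 0 for s in session) and not any(s.get("used_feedback") for s in session):
              tips.append("Mark useful documents and try relevance feedback.")
          return tips
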
  9. Sabbata, S. De; Mizzaro, S.; Reichenbacher, T.: Geographic dimensions of relevance (2015) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 2137) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=2137,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 2137, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2137)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to discuss the emerging geographic features of current concepts of relevance, and to improve, modify, and extend the framework proposed by Mizzaro (1998). The objective is to define a new framework able to account, more completely and precisely, for the notions of relevance involved in mobile information seeking scenarios.
    Design/methodology/approach: The authors formalise two new dimensions of relevance. The first dimension emphasises the spatio-temporal nature of the information seeking process. The second dimension allows us to describe how different concepts of relevance rely on different abstractions of reality.
    Findings: The new framework allows: to conceptualise the point in space and time to which a given notion of relevance refers; to conceptualise the level of abstraction taken into account by a given notion of relevance; and to include widely adopted facets (e.g. users' mobility, preferences, and social context) in the classification of notions of relevance.
    Originality/value: The conceptual discussion presented in this paper contributes to the future development of relevance in the scope of mobile information seeking scenarios. The authors provide a more comprehensive framework for the conceptualization, development, and classification of notions of relevance in the field of information retrieval and location-based services.
    Type
    a
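
    Entry 9's two added dimensions (a spatio-temporal anchor and a level of abstraction) extend the four-dimensional sketch given after entry 5. A hedged continuation, with invented field names:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class GeoRelevancePoint:
          """Relevance notion anchored in space-time and at an abstraction level
          (sketch; extends the RelevancePoint example after entry 5)."""
          base: "RelevancePoint"   # the 1998 four-dimensional classification
          lat: float               # where the notion of relevance is anchored
          lon: float
          when: str                # moment in the seeking process it refers to
          abstraction: str         # e.g. "physical entity" vs. "map feature"
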
  10. Crestani, F.; Mizzaro, S.; Scagnetto, I.: Mobile information retrieval (2017) 0.00
    0.0011959607 = product of:
      0.0023919214 = sum of:
        0.0023919214 = product of:
          0.0047838427 = sum of:
            0.0047838427 = weight(_text_:a in 4469) [ClassicSimilarity], result of:
              0.0047838427 = score(doc=4469,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.090081796 = fieldWeight in 4469, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4469)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This book offers a helpful starting point in the scattered, rich, and complex body of literature on Mobile Information Retrieval (Mobile IR), reviewing more than 200 papers in nine chapters. Highlighting the most interesting and influential contributions that have appeared in recent years, it particularly focuses on both user interaction and techniques for the perception and use of context, which, taken together, shape much of today's research on Mobile IR. The book starts by addressing the differences between IR and Mobile IR, while also reviewing the foundations of Mobile IR research. It then examines the different kinds of documents, users, and information needs that can be found in Mobile IR, and which set it apart from standard IR. Next, it discusses the two important issues of user interfaces and context-awareness. In closing, it covers issues related to the evaluation of Mobile IR applications. Overall, the book offers a valuable tool, helping new and veteran researchers alike to navigate this exciting and highly dynamic area of research.