Search (2 results, page 1 of 1)

  • × author_ss:"Batet, M."
  • × year_i:[2010 TO 2020}
  1. Sánchez, D.; Batet, M.; Valls, A.; Gibert, K.: Ontology-driven web-based semantic similarity (2010) 0.00
    3.2888478E-4 = product of:
      0.0049332716 = sum of:
        0.0049332716 = product of:
          0.009866543 = sum of:
            0.009866543 = weight(_text_:information in 335) [ClassicSimilarity], result of:
              0.009866543 = score(doc=335,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19395474 = fieldWeight in 335, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=335)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
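    The score breakdown above is Lucene's "explain" output for ClassicSimilarity, i.e. classic TF-IDF scoring. As a minimal sketch, the arithmetic can be reproduced from the inputs shown in the tree; the tf and idf formulas below are the standard ClassicSimilarity definitions, and the same numbers recur in result 2, where only the document id differs:

        import math

        # Inputs copied verbatim from the explain tree above
        doc_freq, max_docs = 20772, 44218   # from the idf(...) line
        freq = 8.0                          # termFreq of _text_:information
        field_norm = 0.0390625              # fieldNorm(doc=335)
        query_norm = 0.028978055            # queryNorm

        tf = math.sqrt(freq)                             # 2.828427
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779
        query_weight = idf * query_norm                  # 0.050870337 = queryWeight
        field_weight = tf * idf * field_norm             # 0.19395474 = fieldWeight

        weight = query_weight * field_weight             # 0.009866543
        # coord(1/2) and coord(1/15): fractions of query clauses that matched
        score = weight * (1 / 2) * (1 / 15)
        print(f"{score:.7e}")                            # 3.2888478e-04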
    
    Abstract
    Estimation of the degree of semantic similarity/distance between concepts is a very common problem in research areas such as natural language processing, knowledge acquisition, information retrieval or data mining. In the past, many similarity measures have been proposed, exploiting explicit knowledge (such as the structure of a taxonomy) or implicit knowledge (such as information distribution). In the former case, taxonomies and/or ontologies are used to introduce additional semantics; in the latter case, frequencies of term appearances in a corpus are considered. Classical measures based on those premises suffer from some problems: in the first case, their excessive dependency on the taxonomical/ontological structure; in the second case, the lack of semantics of a purely statistical analysis of occurrences and/or the ambiguity of estimating concept statistical distribution from term appearances. Measures based on the Information Content (IC) of taxonomical concepts combine both approaches. However, they heavily depend on a corpus properly pre-tagged and disambiguated according to the ontological entities in order to compute accurate concept appearance probabilities. This limits the applicability of those measures to other ontologies (like specific domain ontologies) and massive corpora (like the Web). In this paper, several of the presented issues are analyzed. Modifications of classical similarity measures are also proposed. They are based on a contextualized and scalable version of IC computation on the Web, exploiting taxonomical knowledge. The goal is to avoid the measures' dependency on corpus pre-processing in order to achieve reliable results and to minimize language ambiguity. Our proposals are able to outperform classical approaches when the Web is used for estimating concept probabilities.
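    The measures sketched in the abstract estimate concept probabilities, and hence Information Content (IC), from web hit counts instead of a pre-tagged corpus. A minimal sketch of that idea, assuming made-up hit counts in place of a real search engine API and using Lin's IC-based similarity as the classical measure being adapted (the exact contextualization scheme is the paper's contribution and is only approximated here):

        import math

        TOTAL_RESOURCES = 1e12      # assumed number of indexed web resources

        FAKE_HITS = {               # hypothetical hit counts, for illustration only
            '"dog" "canine"': 4_000_000,
            '"wolf" "canine"': 2_500_000,
            '"canine" "animal"': 9_000_000,
        }

        def hits(query: str) -> int:
            # Stand-in for a search engine's result-count API
            return FAKE_HITS.get(query, 1)

        def ic_web(concept: str, subsumer: str) -> float:
            # IC(c) = -log p(c), with p(c) estimated from web hits; the query is
            # contextualized with a taxonomical subsumer to reduce ambiguity
            return -math.log((hits(f'"{concept}" "{subsumer}"') + 1) / TOTAL_RESOURCES)

        def sim_lin(a: str, b: str, lcs: str, lcs_subsumer: str) -> float:
            # Lin (1998): sim(a, b) = 2 * IC(LCS) / (IC(a) + IC(b)),
            # where LCS is the least common subsumer of a and b in the ontology
            return 2 * ic_web(lcs, lcs_subsumer) / (ic_web(a, lcs) + ic_web(b, lcs))

        print(sim_lin("dog", "wolf", lcs="canine", lcs_subsumer="animal"))  # ~0.92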
    Source
    Journal of Intelligent Information Systems. 35(2010) no.x, S.383-413
  2. Sánchez, D.; Batet, M.: C-sanitized : a privacy model for document redaction and sanitization (2016) 0.00
    3.2888478E-4 = product of:
      0.0049332716 = sum of:
        0.0049332716 = product of:
          0.009866543 = sum of:
            0.009866543 = weight(_text_:information in 2493) [ClassicSimilarity], result of:
              0.009866543 = score(doc=2493,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19395474 = fieldWeight in 2493, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2493)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Vast amounts of information are exchanged and/or released daily. The sensitive nature of much of this information creates a serious privacy threat when documents are made available to untrusted third parties in an uncontrolled manner. In such cases, appropriate data protection measures should be undertaken by the responsible organization, especially under the umbrella of current legislation on data privacy. To do so, human experts are usually asked to redact or sanitize document contents. To ease this burdensome task, this paper presents a privacy model for document redaction/sanitization, which offers several advantages over other models available in the literature. Based on the well-established foundations of data semantics and information theory, our model provides a framework for developing and implementing automated and inherently semantic redaction/sanitization tools. Moreover, contrary to ad hoc redaction methods, our proposal provides a priori privacy guarantees that can be intuitively defined according to current legislation on data privacy. Empirical tests performed within the context of several use cases illustrate the applicability of our model and its ability to mimic the reasoning of human sanitizers.
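    The model's disclosure test is information-theoretic. As an illustrative sketch only: the code below redacts a term t when its pointwise mutual information with the sensitive entity c reaches an assumed fraction alpha of c's own information content, with probabilities estimated from made-up corpus counts; both the thresholded criterion and the count helper are stand-ins, not the paper's exact C-sanitized condition:

        import math

        TOTAL = 1e9                     # assumed corpus size, in documents

        FAKE_COUNTS = {                 # hypothetical document counts
            "AIDS": 120_000,
            "HIV": 150_000,
            "hospital": 9_000_000,
            ("HIV", "AIDS"): 110_000,
            ("hospital", "AIDS"): 60_000,
        }

        def count(*terms: str) -> int:
            # Stand-in for a corpus/web count lookup
            key = terms[0] if len(terms) == 1 else terms
            return FAKE_COUNTS.get(key, 1)

        def ic(term: str) -> float:
            # IC(c) = -log p(c)
            return -math.log(count(term) / TOTAL)

        def pmi(t: str, c: str) -> float:
            # Pointwise mutual information between a document term and entity c
            return math.log((count(t, c) / TOTAL) / ((count(t) / TOTAL) * (count(c) / TOTAL)))

        def sanitize(terms: list[str], sensitive: str, alpha: float = 0.5) -> list[str]:
            # Keep a term only if it does not disclose too much about the entity
            return [t for t in terms if pmi(t, sensitive) < alpha * ic(sensitive)]

        print(sanitize(["HIV", "hospital"], sensitive="AIDS"))  # ['hospital']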
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.148-163