Search (9 results, page 1 of 1)

  • author_ss:"Dutta, B."
  1. Adhikari, A.; Dutta, B.; Dutta, A.; Mondal, D.; Singh, S.: An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology (2018)
    
    Abstract
    Finding similarity between concepts based on their semantics has become a new trend in many applications (e.g., biomedical informatics, natural language processing). Measuring Semantic Similarity (SS) with high accuracy is a challenging task. In this context, the Information Content (IC)-based SS measure has gained popularity over the others. The notion of IC comes from information theory, which has great potential for characterizing the semantics of concepts. Designing an IC-based SS framework comprises (i) an IC calculator and (ii) an SS calculator. In this article, we propose a generic intrinsic IC-based SS calculator. We also introduce a new structural aspect of an ontology called DCS (Disjoint Common Subsumers), which plays a significant role in deciding the similarity between two concepts. We evaluated the proposed similarity calculator against existing intrinsic IC-based similarity calculators, as well as corpus-dependent similarity calculators, on several benchmark data sets. The experimental results show that the proposed similarity calculator correlates more strongly with human evaluation than the existing state-of-the-art IC-based similarity calculators.
    Type
    a
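The IC-based framework the abstract describes (an IC calculator feeding an SS calculator) can be sketched in a few lines. The paper's DCS-based measure is not specified here, so this toy uses the classic intrinsic IC of Seco et al. and a Lin-style similarity over a made-up is-a hierarchy; all concept names are illustrative, not the paper's data.

```python
from math import log

# Toy is-a ontology: child -> parent (hypothetical example, not the paper's data)
parents = {
    "cat": "mammal", "dog": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal", "animal": "entity",
}

def subsumers(c):
    """All ancestors of c, including c itself."""
    out = {c}
    while c in parents:
        c = parents[c]
        out.add(c)
    return out

def hyponyms(c):
    """Descendants of c: concepts that have c among their subsumers."""
    all_concepts = set(parents) | set(parents.values())
    return {x for x in all_concepts if c in subsumers(x) and x != c}

MAX = len(set(parents) | set(parents.values()))

def ic(c):
    """Intrinsic IC (Seco et al. style): leaf concepts are most informative."""
    return 1.0 - log(len(hyponyms(c)) + 1) / log(MAX)

def lin_sim(a, b):
    """Lin-style similarity via the most informative common subsumer."""
    common = subsumers(a) & subsumers(b)
    mics = max(ic(c) for c in common)
    denom = ic(a) + ic(b)
    return 2 * mics / denom if denom else 0.0

print(lin_sim("cat", "dog") > lin_sim("cat", "sparrow"))  # closer concepts score higher
```

Swapping in a DCS-aware IC calculator, as the paper proposes, would change only `ic` and the choice of subsumers, which is the modularity the two-part framework is meant to provide.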
  2. Varadarajan, U.; Dutta, B.: Models for narrative information : a study (2022)
    
    Abstract
    A review of the literature shows that comparatively few studies survey ontology-based narrative models, which motivates the current work. A parametric approach was adopted to report the existing ontology-driven models for narrative information, with the narrative and ontology components serving as the parameters. The study thus brings the relevant literature and the ontology models together. A systematic literature review methodology was adopted for an extensive literature selection, and the models were selected from the literature using a stratified random sampling technique. The findings give an overview of narrative models across domains. The study identifies the differences and similarities of knowledge representation in ontology-based narrative information models and explores the basic and top-level concepts in the models. It also discusses narrative theories in the context of ongoing research and identifies the state-of-the-art literature on ontology-based narrative information.
    Type
    a
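The stratified random sampling step mentioned in the abstract can be illustrated with a short sketch. The study's actual strata and paper pool are not given here, so the paper IDs and the domain used as the stratum key are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical pool of candidate papers keyed by domain (the stratum);
# IDs and domains are illustrative, not the study's data.
papers = [
    ("P1", "cultural-heritage"), ("P2", "cultural-heritage"), ("P3", "news"),
    ("P4", "news"), ("P5", "fiction"), ("P6", "fiction"), ("P7", "fiction"),
]

def stratified_sample(items, key, frac, seed=42):
    """Draw the same fraction from every stratum (at least one item each)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[key(item)].append(item)
    picked = []
    for group in strata.values():
        k = max(1, round(frac * len(group)))
        picked.extend(rng.sample(group, k))
    return picked

sample = stratified_sample(papers, key=lambda p: p[1], frac=0.5)
```

The point of stratifying is that every domain stays represented in the sample, which simple random sampling over the whole pool does not guarantee.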
  3. Madalli, D.P.; Chatterjee, U.; Dutta, B.: An analytical approach to building a core ontology for food (2017)
    
    Abstract
    Purpose: The purpose of this paper is to demonstrate the construction of a core ontology for food. To construct it, the authors propose an approach called yet another methodology for ontology plus (YAMO+). The goal is to exhibit the construction of a core ontology for a domain, which can be further extended and converted into application ontologies.
    Design/methodology/approach: To motivate the construction of the core ontology for food, the authors first articulate a set of application scenarios; the constructed core ontology can then be used to build application-specific ontologies for those scenarios. As the developmental approach to the core ontology, the authors propose the YAMO+ methodology, designed following the theory of analytico-synthetic classification. YAMO+ is generic in nature and can be applied to build core ontologies for any domain.
    Findings: Constructing a core ontology requires a thorough understanding of the domain and its requirements, and it poses various challenges, as discussed in this paper. The proposed approach has proven sturdy enough to face these challenges, and the core ontology is amenable to conversion into an application ontology.
    Practical implications: The constructed core ontology for the food domain can readily be used for developing application ontologies related to food, and YAMO+ can be applied to build core ontologies for any domain.
    Originality/value: To the best of the authors' knowledge, based on a study of the state-of-the-art literature, this is the first formal approach to the design of a core ontology. The constructed core ontology for food is also the first of its kind, as no such ontology for the food domain is available on the web.
    Type
    a
  4. Satija, M.P.; Madalli, D.P.; Dutta, B.: Modes of growth of subjects (2014)
    
    Abstract
    We define knowledge as a system in a perpetually dynamic continuum. Knowledge grows because it is always fragmentary, though quantifying this growth is nearly impossible. Growth, inherent in the nature of knowledge, may be natural, planned, or induced. S.R. Ranganathan elucidated the various modes of growth of subjects, viz. fission, lamination, loose assemblage, fusion, distillation, partial comprehensions, and subject bundles. The present study adds a few more modes of development of subjects. We describe these modes and fit them into the framework of growth by specialization and of interdisciplinary and multidisciplinary growth. We also examine the emergence of online domains such as web directories and focus on possible modes of formation of such domains. The paper concludes that new modes may emerge in the future in consonance with new research trends and ever-changing social needs.
    Type
    a
  5. Giunchiglia, F.; Maltese, V.; Dutta, B.: Domains and context : first steps towards managing diversity in knowledge (2011)
    
    Abstract
    Despite the progress made, one of the main barriers to the use of semantics is the lack of background knowledge. Dealing with this problem has turned out to be very difficult because, on the one hand, the background knowledge should be very large and virtually unbounded and, on the other hand, it should be context-sensitive and able to capture the diversity of the world, for instance in terms of language and knowledge. Our proposed solution addresses the problem in three steps: (1) create an extensible diversity-aware knowledge base providing a continuously growing quantity of properly organized knowledge; (2) given the problem, build at run time the proper context within which to perform the reasoning; (3) solve the problem. Our work is based on two key ideas. The first is the use of domains, i.e. a general semantics-aware methodology and technique for structuring the background knowledge. The second is building the context of reasoning through a suitable combination of domains. Our goal in this paper is to introduce the overall approach, show how it can be applied to an important use case, i.e. the matching of classifications, and describe our first steps towards the construction of a large-scale diversity-aware knowledge base.
  6. Sinha, P.K.; Dutta, B.: A systematic analysis of flood ontologies : a parametric approach (2020)
    
    Abstract
    The article identifies the core literature available on flood ontologies and reviews these ontologies from various perspectives, such as their purpose, type, design methodologies, ontologies (re)used, and their focus on specific flood disaster phases. The study was conducted in two stages: (i) literature identification, using a systematic literature review methodology; and (ii) ontological review, using a parametric approach. The study yielded a set of fourteen papers discussing flood ontology (FO). The ontological review revealed that most of the flood ontologies were task ontologies, formal, and modular, and used the web ontology language (OWL) for their representation. The most (re)used ontologies were SWEET, SSN, Time, and Space. METHONTOLOGY was the preferred design methodology, and for evaluation, application-based or data-based approaches were preferred. The majority of the ontologies were built around the response phase of the disaster. The unavailability of the full ontologies somewhat restricted the current study, as the structural ontology metrics are missing. Nevertheless, the scientific community and the developers of flood disaster management systems can refer to this work to see what the literature offers on flood ontology and on the other major domains essential to building an FO.
    Type
    a
  7. Bardhan, S.; Dutta, B.: ONCO: an ontology model for MOOC platforms (2022)
    
    Abstract
    When searching for a particular course across e-learning platforms, a learner must browse each platform separately, which is time-consuming. To resolve this issue, an ontology has been developed that can provide single-point access to all the e-learning platforms. The modelled ONline Course Ontology (ONCO) is based on YAMO, METHONTOLOGY, and IDEF5 and is built with the Protégé ontology editing tool. ONCO is populated with sample data and then evaluated against pre-defined competency questions. Complex SPARQL queries are executed to assess the effectiveness of the constructed ontology, and the modelled ontology is able to answer all the sampled queries. ONCO has been developed for the efficient retrieval of similar courses from massive open online course (MOOC) platforms.
    Type
    a
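ONCO's actual SPARQL competency questions and schema are not reproduced in the abstract. As a rough stand-in, the sketch below runs a SPARQL-style triple-pattern query in plain Python over a hypothetical merged course graph, to illustrate the single-point-access idea; all course IDs, predicate names, and platform names are invented.

```python
# Hypothetical mini knowledge graph: (subject, predicate, object) triples
# merging course records from two MOOC platforms (all names are made up).
triples = [
    ("course:ml101", "rdf:type", "onco:Course"),
    ("course:ml101", "onco:topic", "Machine Learning"),
    ("course:ml101", "onco:platform", "PlatformA"),
    ("course:ml202", "rdf:type", "onco:Course"),
    ("course:ml202", "onco:topic", "Machine Learning"),
    ("course:ml202", "onco:platform", "PlatformB"),
    ("course:db150", "rdf:type", "onco:Course"),
    ("course:db150", "onco:topic", "Databases"),
    ("course:db150", "onco:platform", "PlatformA"),
]

def match(pattern, store):
    """Match one (s, p, o) pattern; None acts like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Competency question: which courses, on any platform, cover Machine Learning?
ml_courses = [s for s, _, _ in match((None, "onco:topic", "Machine Learning"), triples)]
platforms = {o for s in ml_courses
             for _, _, o in match((s, "onco:platform", None), triples)}
print(sorted(ml_courses))  # courses from both platforms, found with one query
print(sorted(platforms))
```

A real deployment would express the same pattern in SPARQL over the populated ontology (e.g. with a triple store or rdflib); the point is that one query spans every platform's records at once.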
  8. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014)
    
    Abstract
    So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing, and search of millions of books. However, current KO solutions are limited in expressivity, as they only support queries by document properties, e.g., by title, author, and subject. In parallel, within the artificial intelligence and semantic web communities, knowledge representation (KR) has developed very powerful and expressive techniques which, via the use of ontologies, support queries by any entity property (e.g., the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity-specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach, as introduced in KO, that retains all the advantages of KR and compensates for the limitations of KO. DERA guarantees at the same time quality, extensibility, scalability, and effectiveness in search.
    Type
    a
  9. Dutta, B.: Ranganathan's elucidation of subject in the light of 'Infinity (∞)' (2015)
    
    Abstract
    This paper reviews Ranganathan's description of subject from a mathematical angle. Ranganathan was highly influenced by the nineteenth-century mathematician Georg Cantor, and he used the concept of infinity in developing an axiomatic interpretation of subject. The majority of library scientists interpreted the concept of subject merely as a term, descriptor, or heading to be used in cataloguing and subject indexing. Some interpreted subject on the basis of the document, i.e. from the angle of aboutness or the epistemological potential of the document; others explained subject from the viewpoint of social, cultural, or socio-cultural processes; and attempts were made to describe subject from an epistemological viewpoint. But S.R. Ranganathan was the first to develop an axiomatic concept of subject in its own right. He built up an independent idea of subject that is ubiquitously pervasive in the human cognition process. To develop the basic foundation of subject, he used the mathematical concepts of infinity and the infinitesimal, and construed the set of subjects, or universe of subjects, as a continuous infinite universe. A subject may also exist in extremely micro form, which he termed a spot subject and analogized with a point: dimensionless, having only an existence. The influence of the twentieth-century physicist George Gamow on Ranganathan's thought is also discussed.
    Type
    a