Search (7 results, page 1 of 1)

  • author_ss:"Dutta, B."
  • year_i:[2010 TO 2020}
  1. Dutta, B.: Ranganathan's elucidation of subject in the light of 'Infinity (∞)' (2015) 0.00
    0.0037895362 = product of:
      0.026526753 = sum of:
        0.026526753 = weight(_text_:of in 2794) [ClassicSimilarity], result of:
          0.026526753 = score(doc=2794,freq=40.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.38633084 = fieldWeight in 2794, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2794)
      0.14285715 = coord(1/7)
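    The explain tree above composes the score multiplicatively: queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and the final score is their product scaled by the coordination factor. A minimal Python sketch of Lucene's ClassicSimilarity (TF-IDF) formula, fed the values reported for doc 2794, reproduces the number at the top of the tree. The helper name and argument layout are illustrative, not the Lucene API:

    ```python
    import math

    def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coord):
        """Recompute a single-term ClassicSimilarity score from explain-tree inputs."""
        tf = math.sqrt(freq)                              # tf(freq) = sqrt(termFreq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
        query_weight = idf * query_norm                   # queryWeight
        field_weight = tf * idf * field_norm              # fieldWeight
        return query_weight * field_weight * coord        # score = weight * coord

    score = classic_similarity(freq=40.0, doc_freq=25162, max_docs=44218,
                               query_norm=0.043909185, field_norm=0.0390625,
                               coord=1 / 7)
    # score matches the reported 0.0037895362 (up to float rounding)
    ```

    The same function, with each hit's freq, fieldNorm, and docFreq, reproduces the other six explain trees on this page, since they all share the one-term query structure.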
    
    Abstract
    This paper reviews Ranganathan's description of subject from a mathematical angle. Ranganathan was strongly influenced by the nineteenth-century mathematician Georg Cantor and used the concept of infinity in developing an axiomatic interpretation of subject. The majority of library scientists have interpreted the concept of subject merely as a term, descriptor or heading, so as to include it in cataloguing and subject indexing. Some have interpreted subject on the basis of the document, i.e. from the angle of aboutness or the epistemological potential of the document; others have explained subject as a social, cultural or socio-cultural process. Attempts have also been made to describe subject from an epistemological viewpoint. But S.R. Ranganathan was the first to develop an axiomatic concept of subject in its own right. He built up an independent idea of subject as ubiquitously pervasive in the human cognitive process. To lay the foundation of subject, he used the mathematical concepts of infinity and the infinitesimal, and construed the set of subjects, or universe of subjects, as a continuous infinite universe. A subject may also exist in extremely micro form, which he termed a spot subject and analogized with a point: dimensionless, having only existence. The influence of the twentieth-century physicist George Gamow on Ranganathan's thought is also discussed.
    Source
    Annals of library and information studies. 62(2015) no.4, S.255-264
  2. Satija, M.P.; Madalli, D.P.; Dutta, B.: Modes of growth of subjects (2014) 0.00
    0.0035224345 = product of:
      0.02465704 = sum of:
        0.02465704 = weight(_text_:of in 1383) [ClassicSimilarity], result of:
          0.02465704 = score(doc=1383,freq=24.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.3591007 = fieldWeight in 1383, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1383)
      0.14285715 = coord(1/7)
    
    Abstract
    We define knowledge as a system in a perpetually dynamic continuum. Knowledge grows because it is always fragmentary, though quantifying this growth is nearly impossible. Growth, inherent in the nature of knowledge, may be natural, planned, or induced. S.R. Ranganathan elucidated the various modes of growth of subjects, viz. fission, lamination, loose assemblage, fusion, distillation, partial comprehension, and subject bundles. The present study adds a few more modes of development of subjects. We describe these modes and fit them into a framework of growth by specialization and by interdisciplinary and multidisciplinary growth. We also examine the emergence of online domains, such as web directories, and focus on possible modes of formation of such domains. The paper concludes that new modes may emerge in the future, in consonance with new research trends and ever-changing social needs.
  3. Giunchiglia, F.; Maltese, V.; Dutta, B.: Domains and context : first steps towards managing diversity in knowledge (2011) 0.00
    0.0032818348 = product of:
      0.022972843 = sum of:
        0.022972843 = weight(_text_:of in 603) [ClassicSimilarity], result of:
          0.022972843 = score(doc=603,freq=30.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.33457235 = fieldWeight in 603, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=603)
      0.14285715 = coord(1/7)
    
    Abstract
    Despite the progress made, one of the main barriers to the use of semantics is the lack of background knowledge. Dealing with this problem has turned out to be very difficult because, on the one hand, the background knowledge should be very large and virtually unbounded and, on the other hand, it should be context-sensitive and able to capture the diversity of the world, for instance in terms of language and knowledge. Our proposed solution addresses the problem in three steps: (1) create an extensible diversity-aware knowledge base providing a continuously growing quantity of properly organized knowledge; (2) given the problem, build at run time the proper context within which to perform the reasoning; (3) solve the problem. Our work is based on two key ideas. The first is the use of domains, i.e. a general semantic-aware methodology and technique for structuring the background knowledge. The second is building the context of reasoning through a suitable combination of domains. Our goal in this paper is to introduce the overall approach, show how it can be applied to an important use case, i.e. the matching of classifications, and describe our first steps towards the construction of a large-scale diversity-aware knowledge base.
    Content
    Also in: Journal of Web Semantics, special issue on Reasoning with Context in the Semantic Web, April 2012.
    Imprint
    Trento : University of Trento / Department of Information Engineering and Computer Science
  4. Madalli, D.P.; Chatterjee, U.; Dutta, B.: An analytical approach to building a core ontology for food (2017) 0.00
    0.002625468 = product of:
      0.018378275 = sum of:
        0.018378275 = weight(_text_:of in 3362) [ClassicSimilarity], result of:
          0.018378275 = score(doc=3362,freq=30.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.26765788 = fieldWeight in 3362, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3362)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose The purpose of this paper is to demonstrate the construction of a core ontology for food. To construct it, the authors propose an approach called yet another methodology for ontology plus (YAMO+). The goal is to exhibit the construction of a core ontology for a domain that can be further extended and converted into application ontologies. Design/methodology/approach To motivate the construction of the core ontology for food, the authors first articulate a set of application scenarios; the idea is that the constructed core ontology can be used to build application-specific ontologies for those scenarios. As part of the developmental approach, the authors propose the YAMO+ methodology, designed following the theory of analytico-synthetic classification. YAMO+ is generic in nature and can be applied to build core ontologies for any domain. Findings Constructing a core ontology requires a thorough understanding of the domain and its requirements, and poses various challenges, which are discussed in this paper. The proposed approach has proven sturdy enough to meet these challenges, and the resulting core ontology is observed to be amenable to conversion into an application ontology. Practical implications The constructed core ontology for the food domain can be readily used for developing application ontologies related to food, and the proposed YAMO+ methodology can be applied to build core ontologies for any domain. Originality/value To the best of the authors' knowledge, based on a study of the state-of-the-art literature, the proposed approach is the first formal approach to the design of a core ontology. The constructed core ontology for food is also the first of its kind, as no such ontology is available on the web for the food domain.
    Source
    Journal of documentation. 73(2017) no.1, S.123-144
  5. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.00
    0.002542098 = product of:
      0.017794685 = sum of:
        0.017794685 = weight(_text_:of in 1369) [ClassicSimilarity], result of:
          0.017794685 = score(doc=1369,freq=18.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.25915858 = fieldWeight in 1369, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
      0.14285715 = coord(1/7)
    
    Abstract
    So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing and search of millions of books. However, current KO solutions are limited in expressivity, as they only support queries by document properties, e.g. by title, author and subject. In parallel, within the artificial intelligence and semantic web communities, knowledge representation (KR) has developed very powerful and expressive techniques which, via the use of ontologies, support queries by any entity property (e.g. the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity-specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach introduced in KO, which retains all the advantages of KR while compensating for the limitations of KO. DERA guarantees quality, extensibility, scalability and effectiveness in search at the same time.
  6. Adhikari, A.; Dutta, B.; Dutta, A.; Mondal, D.; Singh, S.: An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology (2018) 0.00
    0.0023967132 = product of:
      0.016776992 = sum of:
        0.016776992 = weight(_text_:of in 4372) [ClassicSimilarity], result of:
          0.016776992 = score(doc=4372,freq=16.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.24433708 = fieldWeight in 4372, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4372)
      0.14285715 = coord(1/7)
    
    Abstract
    Finding similarity between concepts based on semantics has become a new trend in many applications (e.g. biomedical informatics, natural language processing). Measuring semantic similarity (SS) with high accuracy is a challenging task. In this context, information content (IC)-based SS measures have gained popularity over the others. The notion of IC derives from information theory, which has great potential for characterizing the semantics of concepts. Designing an IC-based SS framework comprises (i) an IC calculator and (ii) an SS calculator. In this article, we propose a generic intrinsic IC-based SS calculator. We also introduce a new structural aspect of an ontology, called disjoint common subsumers (DCS), that plays a significant role in deciding the similarity between two concepts. We evaluated the proposed similarity calculator against the existing intrinsic IC-based similarity calculators, as well as corpus-dependent similarity calculators, using several benchmark data sets. The experimental results show that the proposed calculator correlates more highly with human judgement than the existing state-of-the-art IC-based similarity calculators.
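    The two-part framework the abstract describes (an IC calculator feeding an SS calculator) can be sketched with well-known generic formulas: a Seco-style intrinsic IC derived purely from ontology structure, and Lin's similarity over the least common subsumer. This is a minimal illustration of the framework's shape, not the DCS-based calculator proposed in the article:

    ```python
    import math

    def intrinsic_ic(num_hyponyms: int, total_concepts: int) -> float:
        """Intrinsic IC calculator (Seco-style): concepts with fewer hyponyms
        carry more information. Leaves get IC = 1.0; the root approaches 0.0."""
        return 1.0 - math.log(num_hyponyms + 1) / math.log(total_concepts)

    def lin_similarity(ic_a: float, ic_b: float, ic_lcs: float) -> float:
        """SS calculator (Lin's measure): information shared via the least
        common subsumer, normalized by the concepts' own IC values."""
        return 2.0 * ic_lcs / (ic_a + ic_b)
    ```

    An intrinsic measure like this needs no corpus frequencies, only the ontology's hyponym counts, which is what distinguishes the "intrinsic" calculators the article evaluates from the corpus-dependent ones.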
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.8, S.1023-1034
  7. Dutta, B.: Organizing knowledge : then and now (2015) 0.00
    0.0016947321 = product of:
      0.011863125 = sum of:
        0.011863125 = weight(_text_:of in 6634) [ClassicSimilarity], result of:
          0.011863125 = score(doc=6634,freq=2.0), product of:
            0.06866331 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.043909185 = queryNorm
            0.17277241 = fieldWeight in 6634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=6634)
      0.14285715 = coord(1/7)
    
    Footnote
    Rez. in: Annals of Library and Information Studies 62(2015) no.4, S.301 (A.K. Das)