Search (4331 results, page 2 of 217)

  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
  1. Zhuge, H.; Zhang, J.: Topological centrality and its e-Science applications (2010) 0.04
    0.04150006 = product of:
      0.12450018 = sum of:
        0.083051346 = weight(_text_:applications in 3984) [ClassicSimilarity], result of:
          0.083051346 = score(doc=3984,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4815245 = fieldWeight in 3984, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3984)
        0.0128330635 = weight(_text_:of in 3984) [ClassicSimilarity], result of:
          0.0128330635 = score(doc=3984,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20947541 = fieldWeight in 3984, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3984)
        0.028615767 = weight(_text_:systems in 3984) [ClassicSimilarity], result of:
          0.028615767 = score(doc=3984,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 3984, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3984)
      0.33333334 = coord(3/9)
    
    Abstract
    Network structure analysis plays an important role in characterizing complex systems. Unlike previous network centrality measures, this article proposes a topological centrality measure that reflects the topological positions of nodes and edges, as well as the influence between nodes and edges, in a general network. Experiments on different networks show the distinguishing features of topological centrality in comparison with degree centrality, closeness centrality, betweenness centrality, information centrality, and PageRank. The topological centrality measure is then applied to discover communities and to construct the backbone network. Its characteristics and significance are further shown in e-Science applications.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.9, S.1824-1841
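The indented breakdown shown under each hit is Lucene's "explain" output for ClassicSimilarity (TF-IDF) ranking. As an illustration only (this is not the search system's own code), the following Python sketch reproduces the arithmetic of result 1 above using the constants from its breakdown: each matching term contributes queryWeight × fieldWeight, the term scores are summed, and the sum is scaled by the coordination factor coord(3/9).

```python
import math

# Reconstruction of the ClassicSimilarity score shown for result 1 (doc 3984).
# All constants are copied from the explain output above; nothing else is assumed.

QUERY_NORM = 0.03917671   # queryNorm, shared by every clause
FIELD_NORM = 0.0546875    # fieldNorm(doc=3984)

def clause_score(freq, idf):
    """Score of one term clause: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                    # e.g. 2.0 = tf(freq=4.0)
    query_weight = idf * QUERY_NORM         # e.g. 0.17247584 = queryWeight
    field_weight = tf * idf * FIELD_NORM    # e.g. 0.4815245  = fieldWeight
    return query_weight * field_weight      # e.g. 0.083051346 = weight(_text_:applications)

clauses = [
    clause_score(4.0, 4.4025097),   # _text_:applications
    clause_score(6.0, 1.5637573),   # _text_:of
    clause_score(2.0, 3.0731742),   # _text_:systems
]

score = sum(clauses) * (3 / 9)      # 0.33333334 = coord(3/9)
print(f"{score:.7f}")               # ~0.0415001, the score shown next to result 1
```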
  2. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.04
    0.040611103 = product of:
      0.12183331 = sum of:
        0.06711562 = weight(_text_:applications in 3926) [ClassicSimilarity], result of:
          0.06711562 = score(doc=3926,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 3926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.008467626 = weight(_text_:of in 3926) [ClassicSimilarity], result of:
          0.008467626 = score(doc=3926,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.13821793 = fieldWeight in 3926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.046250064 = weight(_text_:systems in 3926) [ClassicSimilarity], result of:
          0.046250064 = score(doc=3926,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.38414678 = fieldWeight in 3926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
      0.33333334 = coord(3/9)
    
    Abstract
    Modern information retrieval systems advance user experience on the basis of concept-based rather than keyword-based query answering.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
  3. Poulter, A.: Open source in libraries : an introduction and overview (2010) 0.04
    0.040410094 = product of:
      0.121230274 = sum of:
        0.05872617 = weight(_text_:applications in 4542) [ClassicSimilarity], result of:
          0.05872617 = score(doc=4542,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 4542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4542)
        0.014818345 = weight(_text_:of in 4542) [ClassicSimilarity], result of:
          0.014818345 = score(doc=4542,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 4542, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4542)
        0.047685754 = weight(_text_:software in 4542) [ClassicSimilarity], result of:
          0.047685754 = score(doc=4542,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 4542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4542)
      0.33333334 = coord(3/9)
    
    Abstract
    Purpose - The purpose of this paper is to introduce the concept of open source to a non-technical audience and give an overview of its current and potential applications in libraries. Design/methodology/approach - The paper is based on a literature review. Findings - Open source already aids libraries and has great potential but is hobbled by its intrinsically technical appeal. Originality/value - The paper makes observations about information technology trends which might affect the take-up of open source and introduces open source comprehensively but succinctly.
    Footnote
    Introduction to a special issue "Open source software apps in libraries"
  4. Rajabi, E.; Sanchez-Alonso, S.; Sicilia, M.-A.: Analyzing broken links on the web of data : An experiment with DBpedia (2014) 0.04
    0.040410094 = product of:
      0.121230274 = sum of:
        0.05872617 = weight(_text_:applications in 1330) [ClassicSimilarity], result of:
          0.05872617 = score(doc=1330,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 1330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1330)
        0.014818345 = weight(_text_:of in 1330) [ClassicSimilarity], result of:
          0.014818345 = score(doc=1330,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 1330, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1330)
        0.047685754 = weight(_text_:software in 1330) [ClassicSimilarity], result of:
          0.047685754 = score(doc=1330,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 1330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1330)
      0.33333334 = coord(3/9)
    
    Abstract
    Linked open data allow interlinking and integrating any kind of data on the web. Links between various data sources play a key role insofar as they allow software applications (e.g., browsers, search engines) to operate over the aggregated data space as if it were a single local database. In this new data space, where DBpedia, a data set including structured information from Wikipedia, seems to be the central hub, we analyzed and highlighted outgoing links from this hub in an effort to discover broken links. The paper reports on an experiment to examine the causes of broken links and proposes some treatments for solving this problem.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.8, S.1721-1727
  5. Dunsire, G.: Enhancing information services using machine-to-machine terminology services (2011) 0.04
    0.040410094 = product of:
      0.121230274 = sum of:
        0.05872617 = weight(_text_:applications in 1805) [ClassicSimilarity], result of:
          0.05872617 = score(doc=1805,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 1805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1805)
        0.014818345 = weight(_text_:of in 1805) [ClassicSimilarity], result of:
          0.014818345 = score(doc=1805,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 1805, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1805)
        0.047685754 = weight(_text_:software in 1805) [ClassicSimilarity], result of:
          0.047685754 = score(doc=1805,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 1805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1805)
      0.33333334 = coord(3/9)
    
    Abstract
    This paper describes the basic concepts of terminology services and their role in information retrieval interfaces. Terminology services are consumed by other software applications using machine-to-machine protocols, rather than directly by end-users. An example of a terminology service is the pilot developed by the High Level Thesaurus (HILT) project which has successfully demonstrated its potential for enhancing subject retrieval in operational services. Examples of enhancements in three such services are given. The paper discusses the future development of terminology services in relation to the Semantic Web.
  6. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.04
    0.039748333 = product of:
      0.11924499 = sum of:
        0.05872617 = weight(_text_:applications in 156) [ClassicSimilarity], result of:
          0.05872617 = score(doc=156,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.0128330635 = weight(_text_:of in 156) [ClassicSimilarity], result of:
          0.0128330635 = score(doc=156,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20947541 = fieldWeight in 156, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.047685754 = weight(_text_:software in 156) [ClassicSimilarity], result of:
          0.047685754 = score(doc=156,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
      0.33333334 = coord(3/9)
    
    Abstract
    In this work, we present the results of research evaluating a methodology for extracting mathematical terms for precalculus using techniques for semantically oriented statistical search. We use a corpus-based approach and a combination of different statistically based techniques for extracting keywords, collocations and co-occurrences, as incorporated in the Sketch Engine software. We evaluate the candidate collocation terms for the basic concept function(s) and validate the related methodology against definitions of conceptual terms in the precalculus domain. Finally, we offer a hierarchical representation of the conceptual terms and discuss the results with respect to their possible applications.
  7. Schreiber, G.: Issues in publishing and aligning Web vocabularies (2011) 0.04
    0.037935585 = product of:
      0.113806754 = sum of:
        0.059322387 = weight(_text_:applications in 4809) [ClassicSimilarity], result of:
          0.059322387 = score(doc=4809,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 4809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4809)
        0.019081537 = weight(_text_:of in 4809) [ClassicSimilarity], result of:
          0.019081537 = score(doc=4809,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31146988 = fieldWeight in 4809, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4809)
        0.03540283 = weight(_text_:systems in 4809) [ClassicSimilarity], result of:
          0.03540283 = score(doc=4809,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.29405114 = fieldWeight in 4809, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4809)
      0.33333334 = coord(3/9)
    
    Abstract
    Knowledge organization systems (KOS), such as vocabularies, thesauri and subject headings, contain a wealth of knowledge, collected by dedicated experts over long periods of time. These knowledge sources are potentially of high value to Web applications. To make this possible we need methods to publish these systems and subsequently clarify their relationships, also called "alignments". In this talk Guus discusses methodological issues in publishing and aligning classification systems on the Web. With regard to the publication of Web vocabularies he explains the basic principles for building a SKOS version of a vocabulary and illustrates this with examples. In particular, he discusses how one should prevent information loss, i.e. how to construct a SKOS version that contains all information contained in the original vocabulary model. The talk also examines the role of RDF and OWL in this process. Web vocabularies derive much of their added value from the links they can provide to other vocabularies. He explains the process of vocabulary alignment, including the choice of alignment technique. Particular attention is paid to an evaluation of the process: how can one assess the quality of the resulting alignment? Human evaluators often play an important role in this process. Guus concludes by showing some examples of how aligned Web vocabularies can be used to add value to applications.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  8. Ohly, H.P.: Sociological aspects of knowledge and knowledge organization (2014) 0.04
    0.037558425 = product of:
      0.08450646 = sum of:
        0.016735615 = weight(_text_:of in 1402) [ClassicSimilarity], result of:
          0.016735615 = score(doc=1402,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 1402, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1402)
        0.020439833 = weight(_text_:systems in 1402) [ClassicSimilarity], result of:
          0.020439833 = score(doc=1402,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 1402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1402)
        0.034061253 = weight(_text_:software in 1402) [ClassicSimilarity], result of:
          0.034061253 = score(doc=1402,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 1402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1402)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 1402) [ClassicSimilarity], result of:
              0.026539518 = score(doc=1402,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 1402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1402)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Since the middle of the last century, knowledge organization, understood as the development of scientific concepts and arrangements, has been seen as a cognitivistic (or rationalistic) problem and thus as universal and logical (cf. Turing 1950). Older approaches accordingly see areas of knowledge as naturally given and organically grown: the knowledge must only be detected and logically arranged. At the latest with constructivism, a 'turn' took place which sees knowledge organization as a social convention and accordingly regards universal standards with skepticism. Simultaneously, the sciences developed a stronger concern with historical, empirical and sociological studies of their foundations, and in the philosophy of science the return to different kinds of relativization gained more importance. With the challenge posed to knowledge organization by self-organizing ordering systems built on social software, a new crisis arises. The future might be a combination of logical descriptions, specialized evaluation, and accompanying user-driven principles. In this paper, several classical sociological positions are discussed, conclusions are drawn for knowledge and information as well as for science and for knowledge organization, and objections and prospects are identified.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  9. Almeida, M.B.; Farinelli, F.: Ontologies for the representation of electronic medical records : the obstetric and neonatal ontology (2017) 0.04
    0.037153613 = product of:
      0.111460835 = sum of:
        0.059322387 = weight(_text_:applications in 3918) [ClassicSimilarity], result of:
          0.059322387 = score(doc=3918,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 3918, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3918)
        0.016735615 = weight(_text_:of in 3918) [ClassicSimilarity], result of:
          0.016735615 = score(doc=3918,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 3918, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3918)
        0.03540283 = weight(_text_:systems in 3918) [ClassicSimilarity], result of:
          0.03540283 = score(doc=3918,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.29405114 = fieldWeight in 3918, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3918)
      0.33333334 = coord(3/9)
    
    Abstract
    Ontology is an interdisciplinary field that involves both the use of philosophical principles and the development of computational artifacts. As artifacts, ontologies can have diverse applications in knowledge management, information retrieval, and information systems, to mention a few. They have been largely applied to organize information in complex fields like Biomedicine. In this article, we present the OntoNeo Ontology, an initiative to build a formal ontology in the obstetrics and neonatal domain. OntoNeo is a resource that has been designed to serve as a comprehensive infrastructure providing scientific research and healthcare professionals with access to relevant information. The goal of OntoNeo is twofold: (a) to organize specialized medical knowledge, and (b) to provide a potential consensual representation of the medical information found in electronic health records and medical information systems. To describe our initiative, we first provide background information about distinct theories underlying ontology, top-level computational ontologies and their applications in Biomedicine. Then, we present the methodology employed in the development of OntoNeo and the results obtained to date. Finally, we discuss the applicability of OntoNeo by presenting a proof of concept that illustrates its potential usefulness in the realm of healthcare information systems.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.11, S.2529-2542
  10. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.04
    0.03700888 = product of:
      0.08326998 = sum of:
        0.012701438 = weight(_text_:of in 5499) [ClassicSimilarity], result of:
          0.012701438 = score(doc=5499,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732687 = fieldWeight in 5499, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.03270373 = weight(_text_:systems in 5499) [ClassicSimilarity], result of:
          0.03270373 = score(doc=5499,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 5499, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.027249003 = weight(_text_:software in 5499) [ClassicSimilarity], result of:
          0.027249003 = score(doc=5499,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.17532499 = fieldWeight in 5499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.010615807 = product of:
          0.021231614 = sum of:
            0.021231614 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
              0.021231614 = score(doc=5499,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.15476047 = fieldWeight in 5499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5499)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Purpose - Modern mathematicians and scientists of math-related disciplines often use Document Preparation Systems (DPS) to write and Computer Algebra Systems (CAS) to calculate mathematical expressions. Usually, they translate the expressions manually between DPS and CAS. This process is time-consuming and error-prone. The purpose of this paper is to automate this translation. This paper uses Maple and Mathematica as the CAS, and LaTeX as the DPS. Design/methodology/approach - Bruce Miller at the National Institute of Standards and Technology (NIST) developed a collection of special LaTeX macros that create links from mathematical symbols to their definitions in the NIST Digital Library of Mathematical Functions (DLMF). The authors are using these macros to perform rule-based translations between the formulae in the DLMF and CAS. Moreover, the authors develop software to ease the creation of new rules and to discover inconsistencies. Findings - The authors created 396 mappings and translated 58.8 percent of DLMF formulae (2,405 expressions) successfully between Maple and DLMF. For a significant percentage, the special function definitions in Maple and the DLMF were different. An atomic symbol in one system maps to a composite expression in the other system. The translator was also successfully used for automatic verification of mathematical online compendia and CAS. The evaluation techniques discovered two errors in the DLMF and one defect in Maple. Originality/value - This paper introduces the first translation tool for special functions between LaTeX and CAS. The approach improves error-prone manual translations and can be used to verify mathematical online compendia and CAS.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 71(2019) no.3, S.415-439
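The rule-based translation between semantic LaTeX macros and CAS syntax described in the abstract of result 10 above can be pictured with a toy sketch. The macro names and rewrite rules below are invented for illustration; they do not reproduce the actual DLMF macro set, Maple's full syntax, or the paper's 396 mappings.

```python
import re

# Toy rule-based LaTeX -> CAS rewriting, loosely in the spirit of the approach
# described above. Macro names and rules are hypothetical examples, not the
# real DLMF macros or the authors' rule set.
RULES = [
    (re.compile(r'\\EulerGamma@\{([^}]*)\}'), r'GAMMA(\1)'),
    (re.compile(r'\\BesselJ\{([^}]*)\}@\{([^}]*)\}'), r'BesselJ(\1, \2)'),
]

def translate(latex: str) -> str:
    """Apply each rewrite rule in turn; anything unmatched is left unchanged."""
    out = latex
    for pattern, replacement in RULES:
        out = pattern.sub(replacement, out)
    return out

print(translate(r'\BesselJ{\nu}@{z} + \EulerGamma@{z}'))
# -> BesselJ(\nu, z) + GAMMA(z)
```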
  11. Lumsden, J.; Hall, H.; Cruickshank, P.: Ontology definition and construction, and epistemological adequacy for systems interoperability : a practitioner analysis (2011) 0.04
    0.036522295 = product of:
      0.10956688 = sum of:
        0.041947264 = weight(_text_:applications in 4801) [ClassicSimilarity], result of:
          0.041947264 = score(doc=4801,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 4801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4801)
        0.017552461 = weight(_text_:of in 4801) [ClassicSimilarity], result of:
          0.017552461 = score(doc=4801,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28651062 = fieldWeight in 4801, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4801)
        0.050067157 = weight(_text_:systems in 4801) [ClassicSimilarity], result of:
          0.050067157 = score(doc=4801,freq=12.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.41585106 = fieldWeight in 4801, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4801)
      0.33333334 = coord(3/9)
    
    Abstract
    Ontology development is considered to be a useful approach to the design and implementation of interoperable systems. This literature review and commentary examines the current state of knowledge in this field with particular reference to processes involved in assuring epistemological adequacy. It takes the perspective of the information systems practitioner keen to adopt a systematic approach to in-house ontology design, taking into consideration previously published work. The study arises from author involvement in an integration/interoperability project on systems that support Scottish Common Housing Registers in which, ultimately, ontological modelling was not deployed. Issues concerning the agreement of meaning, and the implications for the creation of interoperable systems, are discussed. The extent to which those theories, methods and frameworks provide practitioners with a usable set of tools is explored, and examples of practical applications of ontological modelling are noted. The findings from the review of the literature demonstrate a number of difficulties faced by information systems practitioners keen to develop and deploy domain ontologies. A major problem is deciding which broad approach to take: to rely on automatic ontology construction techniques, or to rely on key words and domain experts to develop ontologies.
    Source
    Journal of information science. xx(2011), no.x, S.1-9
  12. Mahesh, K.; Karanth, P.: ¬A novel knowledge organization scheme for the Web : superlinks with semantic roles (2012) 0.04
    0.036420148 = product of:
      0.10926044 = sum of:
        0.059322387 = weight(_text_:applications in 822) [ClassicSimilarity], result of:
          0.059322387 = score(doc=822,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 822, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=822)
        0.015876798 = weight(_text_:of in 822) [ClassicSimilarity], result of:
          0.015876798 = score(doc=822,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25915858 = fieldWeight in 822, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=822)
        0.034061253 = weight(_text_:software in 822) [ClassicSimilarity], result of:
          0.034061253 = score(doc=822,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=822)
      0.33333334 = coord(3/9)
    
    Abstract
    We discuss the needs of a knowledge organization scheme for supporting Web-based software applications. We show how it differs from traditional knowledge organization schemes due to the virtual, dynamic, ad-hoc, user-specific and application-specific nature of Web-based knowledge. The sheer size of Web resources also adds to the complexity of organizing knowledge on the Web. As such, a standard, global scheme such as a single ontology for classifying and organizing all Web-based content is unrealistic. There is nevertheless a strong and immediate need for effective knowledge organization schemes to improve the efficiency and effectiveness of Web-based applications. In this context, we propose a novel knowledge organization scheme wherein concepts in the ontology of a domain are semantically interlinked with specific pieces of Web-based content using a rich hyper-linking structure known as Superlinks with well-defined semantic roles. We illustrate how such a knowledge organization scheme improves the efficiency and effectiveness of a Web-based e-commerce retail store.
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
  13. Muresan, S.; Klavans, J.L.: Inducing terminologies from text : a case study for the consumer health domain (2013) 0.04
    0.0361387 = product of:
      0.108416095 = sum of:
        0.07118686 = weight(_text_:applications in 682) [ClassicSimilarity], result of:
          0.07118686 = score(doc=682,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 682, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=682)
        0.012701439 = weight(_text_:of in 682) [ClassicSimilarity], result of:
          0.012701439 = score(doc=682,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 682, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=682)
        0.0245278 = weight(_text_:systems in 682) [ClassicSimilarity], result of:
          0.0245278 = score(doc=682,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 682, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=682)
      0.33333334 = coord(3/9)
    
    Abstract
    Specialized medical ontologies and terminologies, such as SNOMED CT and the Unified Medical Language System (UMLS), have been successfully leveraged in medical information systems to provide a standard web-accessible medium for interoperability, access, and reuse. However, these clinically oriented terminologies and ontologies cannot provide sufficient support when integrated into consumer-oriented applications, because these applications must "understand" both technical and lay vocabulary. The latter is not part of these specialized terminologies and ontologies. In this article, we propose a two-step approach for building consumer health terminologies from text: 1) automatic extraction of definitions from consumer-oriented articles and web documents, which reflects language in use, rather than relying solely on dictionaries, and 2) learning to map definitions expressed in natural language to terminological knowledge by inducing a syntactic-semantic grammar rather than using hand-written patterns or grammars. We present quantitative and qualitative evaluations of our two-step approach, which show that our framework could be used to induce consumer health terminologies from text.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.727-744
  14. Brunswicker, S.; Jensen, B.; Song, Z.; Majchrzak, A.: Transparency as design choice of open data contests (2018) 0.04
    0.03611748 = product of:
      0.10835243 = sum of:
        0.059322387 = weight(_text_:applications in 4464) [ClassicSimilarity], result of:
          0.059322387 = score(doc=4464,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 4464, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4464)
        0.014968789 = weight(_text_:of in 4464) [ClassicSimilarity], result of:
          0.014968789 = score(doc=4464,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24433708 = fieldWeight in 4464, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4464)
        0.034061253 = weight(_text_:software in 4464) [ClassicSimilarity], result of:
          0.034061253 = score(doc=4464,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 4464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4464)
      0.33333334 = coord(3/9)
    
    Abstract
    Open data contests have become popular virtual events that motivate civic hackers to design high-performing software applications that are useful and usable for citizens. However, such contests stir up controversy among scholars and practitioners about the role of transparency, or more specifically, the unrestricted access and observability of the applications submitted throughout the contest. In one view, transparency may reduce performance because it causes excessive replication, whereas another view argues that transparency can encourage novel forms of reuse, namely recombination. This article proposes a new perspective towards transparency as a design choice in open data contest architectures. We introduce a 2-dimensional view towards transparency, defined as observability of information about each submitted (a) solution (how it works) and its (b) performance (how high it scores). We design a sociotechnical contest architecture that jointly affords both transparency dimensions, and evaluate it in the field during a 21-day contest involving 28 participants. The results suggest that the joint instantiation of both transparency dimensions increases performance by triggering different kinds of recombination. Findings advance the literature on sociotechnical architectures for civic design. Furthermore, they guide practitioners in implementing open data contests and balancing the tension between individual versus collective benefits.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.10, S.1205-1222
  15. Beak, J.; Smiraglia, R.P.: Contours of knowledge : core and granularity in the evolution of the DCMI domain (2014) 0.03
    0.034637667 = product of:
      0.103912994 = sum of:
        0.07118686 = weight(_text_:applications in 1415) [ClassicSimilarity], result of:
          0.07118686 = score(doc=1415,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 1415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=1415)
        0.016802425 = weight(_text_:of in 1415) [ClassicSimilarity], result of:
          0.016802425 = score(doc=1415,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2742677 = fieldWeight in 1415, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1415)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 1415) [ClassicSimilarity], result of:
              0.031847417 = score(doc=1415,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 1415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1415)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Domain analysis reveals the contours of knowledge in diverse discourse communities. The Dublin Core Metadata Initiative (DCMI) conferences represent the cutting edge of research in metadata for the digital age. Beak and Smiraglia (2013) discovered a shared epistemology revealed by co-citation perceptions of the domain, a common ontological base, social semantics, and a limited but focused intent. User groups did not emerge from that analysis, raising an interesting question about the content of core thematic extension versus a highly granular intension. We analyzed keywords from the titles by year to identify core and granular topics as they arose over time. The results showed that only 36 core keywords, e.g. "Dublin Core," "Metadata," "Linked Data," "Applications," etc., represent the domain's extension. However, there was much rich terminology among the granular keywords, e.g., "development," "description," "interoperability," "analysis," "applications," and "classification"; even "domain" pointed to the domain's intension.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  16. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.03
    0.034636453 = product of:
      0.10390935 = sum of:
        0.05872617 = weight(_text_:applications in 267) [ClassicSimilarity], result of:
          0.05872617 = score(doc=267,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=267)
        0.016567415 = weight(_text_:of in 267) [ClassicSimilarity], result of:
          0.016567415 = score(doc=267,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2704316 = fieldWeight in 267, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=267)
        0.028615767 = weight(_text_:systems in 267) [ClassicSimilarity], result of:
          0.028615767 = score(doc=267,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=267)
      0.33333334 = coord(3/9)
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
    Source
    Journal of Intelligent Information Systems
  17. Bénauda, C.-L.; Bordeianu, S.: OCLC's WorldShare Management Services : a brave new world for catalogers (2015) 0.03
    0.034636453 = product of:
      0.10390935 = sum of:
        0.05872617 = weight(_text_:applications in 2617) [ClassicSimilarity], result of:
          0.05872617 = score(doc=2617,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 2617, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2617)
        0.016567415 = weight(_text_:of in 2617) [ClassicSimilarity], result of:
          0.016567415 = score(doc=2617,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2704316 = fieldWeight in 2617, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2617)
        0.028615767 = weight(_text_:systems in 2617) [ClassicSimilarity], result of:
          0.028615767 = score(doc=2617,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 2617, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2617)
      0.33333334 = coord(3/9)
    
    Abstract
    Like other recent library management systems, OCLC's WorldShare Management Services (WMS) is cloud-based. But unlike the others, WMS opens WorldCat for applications beyond its traditional role as a source of bibliographic records. It enables catalogers to work directly from the Master Record, which no longer needs to be exported to a local system. This article describes the impact of WMS on the roles and functions of cataloging departments, and asks if it is changing the meaning of cataloging. It concludes that while the workflows are changed dramatically, the profession of cataloging remains relevant.
  18. Kumar B.L., V.; Nikam, K.: Development of an information support system for yogic science using knowledge organization systems (2014) 0.03
    0.034324303 = product of:
      0.10297291 = sum of:
        0.014818345 = weight(_text_:of in 1391) [ClassicSimilarity], result of:
          0.014818345 = score(doc=1391,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 1391, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1391)
        0.04046881 = weight(_text_:systems in 1391) [ClassicSimilarity], result of:
          0.04046881 = score(doc=1391,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.33612844 = fieldWeight in 1391, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1391)
        0.047685754 = weight(_text_:software in 1391) [ClassicSimilarity], result of:
          0.047685754 = score(doc=1391,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 1391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1391)
      0.33333334 = coord(3/9)
    
    Abstract
    This paper deals with the design and development of an information support system for yogic science using specially designed knowledge organization systems such as a yoga glossary and yogic thesaurus. A machine-readable Sanskrit-English bilingual glossary and thesaurus for yogic science are developed using Greenstone Digital Library software, and a web portal for the yogic science community is provided, which includes a list of all major yoga institutes, research centers, libraries, glossaries, thesauri, yoga subject term visualization maps, Google groups, forums, online digital repositories, and online public access catalogs related to the discipline of yoga.
  19. Herre, H.: General Formal Ontology (GFO) : a foundational ontology for conceptual modelling (2010) 0.03
    0.034320474 = product of:
      0.10296142 = sum of:
        0.04745791 = weight(_text_:applications in 771) [ClassicSimilarity], result of:
          0.04745791 = score(doc=771,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.27515686 = fieldWeight in 771, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03125 = fieldNorm(doc=771)
        0.02279978 = weight(_text_:of in 771) [ClassicSimilarity], result of:
          0.02279978 = score(doc=771,freq=58.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.37216315 = fieldWeight in 771, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=771)
        0.03270373 = weight(_text_:systems in 771) [ClassicSimilarity], result of:
          0.03270373 = score(doc=771,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 771, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=771)
      0.33333334 = coord(3/9)
    
    Abstract
    Research in ontology has in recent years become widespread in the field of information systems, in distinct areas of science, in business, in the economy, and in industry. The importance of ontologies is increasingly recognized in fields as diverse as e-commerce, the Semantic Web, enterprise information integration, qualitative modelling of physical systems, natural language processing, knowledge engineering, and databases. Ontologies provide formal specifications and harmonized definitions of concepts used to represent knowledge of specific domains. An ontology supplies a unifying framework for communication and establishes the basis of the knowledge about a specific domain. The term ontology has two meanings: it denotes, on the one hand, a research area and, on the other hand, a system of organized knowledge. A system of knowledge may exhibit various degrees of formality; in the strongest sense it is an axiomatized and formally represented theory, which is denoted throughout this paper by the term axiomatized ontology. We use the term formal ontology to name an area of research which is becoming a science similar to formal or mathematical logic. Formal ontology is an evolving science which is concerned with the systematic development of axiomatic theories describing forms, modes, and views of being of the world at different levels of abstraction and granularity. Formal ontology combines the methods of mathematical logic with principles of philosophy, but also with the methods of artificial intelligence and linguistics. At the most general level of abstraction, formal ontology is concerned with those categories that apply to every area of the world. The application of formal ontology to domains at different levels of generality yields knowledge systems which are called, according to the level of abstraction, Top Level Ontologies or Foundational Ontologies, and Core Domain or Domain Ontologies. Top level or foundational ontologies apply to every area of the world, in contrast to the various Generic, Domain Core or Domain Ontologies, which are associated with more restricted fields of interest. A foundational ontology can serve as a unifying framework for the representation and integration of knowledge and may support the communication and harmonisation of conceptual systems. The current paper presents an overview of the current stage of the foundational ontology GFO.
    Source
    Theory and applications of ontology: vol.2: computer applications. Eds.: R. Poli et al
  20. De Maio, C.; Fenza, G.; Loia, V.; Senatore, S.: Hierarchical web resources retrieval by exploiting Fuzzy Formal Concept Analysis (2012) 0.03
    0.03352676 = product of:
      0.100580275 = sum of:
        0.050336715 = weight(_text_:applications in 2737) [ClassicSimilarity], result of:
          0.050336715 = score(doc=2737,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 2737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=2737)
        0.015556021 = weight(_text_:of in 2737) [ClassicSimilarity], result of:
          0.015556021 = score(doc=2737,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25392252 = fieldWeight in 2737, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2737)
        0.034687545 = weight(_text_:systems in 2737) [ClassicSimilarity], result of:
          0.034687545 = score(doc=2737,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 2737, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2737)
      0.33333334 = coord(3/9)
    
    Abstract
    In recent years, knowledge structuring has been assuming important roles in several real-world applications such as decision support, cooperative problem solving, e-commerce, the Semantic Web and even planning systems. Ontologies play an important role in supporting automated processes to access information and are at the core of new strategies for the development of knowledge-based systems. Yet developing an ontology is a time-consuming task which often requires accurate domain expertise to tackle structural and logical difficulties in the definition of concepts as well as of conceivable relationships. This work presents an ontology-based retrieval approach that supports data organization and visualization and provides a friendly navigation model. It exploits the fuzzy extension of Formal Concept Analysis theory to elicit conceptualizations from datasets and generate a hierarchy-based representation of the extracted knowledge. An intuitive graphical interface provides a multi-facets view of the built ontology. Through a transparent query-based retrieval, end users navigate across concepts, relations and population.
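For orientation, the core step of Formal Concept Analysis mentioned in the last abstract can be sketched in a few lines. The sketch below uses an invented toy context and plain, crisp (non-fuzzy) FCA; it only illustrates how formal concepts (extent/intent pairs) are derived from a binary object-attribute table, not the fuzzy extension or the retrieval system described in the paper.

```python
from itertools import combinations

# Toy binary context: web resources (objects) x index terms (attributes).
# The data are invented purely for illustration.
context = {
    "page1": {"ontology", "retrieval"},
    "page2": {"ontology", "fuzzy"},
    "page3": {"retrieval", "fuzzy"},
}
objects = sorted(context)
attributes = set().union(*context.values())

def common_attrs(objs):
    """Attributes shared by every object in objs (derivation operator on objects)."""
    return set(attributes) if not objs else set.intersection(*(context[o] for o in objs))

def common_objs(attrs):
    """Objects possessing every attribute in attrs (derivation operator on attributes)."""
    return {o for o in objects if attrs <= context[o]}

# A formal concept is a pair (extent, intent) with intent = common_attrs(extent)
# and extent = common_objs(intent). Closing every subset of objects enumerates
# all concepts of this small context; they form the concept hierarchy (lattice).
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attrs(set(objs))
        extent = common_objs(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "<->", sorted(intent))
```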

Languages

  • e 4005
  • d 309
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1
  • sp 1

Types

  • el 256
  • b 5
  • s 1
  • x 1
