Search (84 results, page 1 of 5)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.14
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.07
    
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  5. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.04
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC), 4th edition, and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in the granularity of the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
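The bidirectional class/term correspondence the abstract describes ("CCT provides for each of the classes the corresponding thesaurus terms, and vice versa") can be sketched minimally; the notation "TP18" and the example labels below are hypothetical illustrations, not actual CCT data.

```python
# Minimal sketch of a classification <-> thesaurus-term mapping kept
# consistent in both directions, as CCT does for its classes and terms.
class_to_terms = {}   # classification notation -> preferred thesaurus terms
term_to_classes = {}  # preferred thesaurus term -> classification notations

def map_pair(notation, term):
    """Record one class/term correspondence in both directions."""
    class_to_terms.setdefault(notation, set()).add(term)
    term_to_classes.setdefault(term, set()).add(notation)

# Hypothetical pairs for illustration only:
map_pair("TP18", "artificial intelligence")
map_pair("TP18", "machine intelligence")
```

Either lookup direction then answers in one step, which is the property that makes the mapping attractive to encode as SKOS mapping relations.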
  6. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.04
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such a specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  7. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.03
    
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object oriented programming and then tested with two textbooks from different domains, astronomy and molecular biology.
    Date
    22. 1.2016 12:38:14
  8. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.03
    
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions. SKOS can be used for subject headings but also for many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS.
"One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
  9. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.03
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for developing an information retrieval system to cater to specific queries of users. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility. This becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is really enduring, as it helps in the achievement of properties like navigation, exploration and faceted browsing. Computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  10. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.02
    
    Content
    Tutorial topics and outline 1. Tutorial Background Overview The Semantic Web, Linked Data, and the Resource Description Framework 2. Ontology Basics and RDFS Tutorial Semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology Examples: domain ontologies, models, and schemas Exercises 3. OWL Overview Tutorial Web Ontology Language (OWL): selected concepts and terminology Exercises
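The RDFS portion of the tutorial outline covers subclass reasoning, which can be illustrated with a minimal sketch; the classes `ex:Dog`, `ex:Animal`, and `ex:LivingThing` are hypothetical examples, not drawn from the tutorial itself.

```python
# Minimal illustration of rdfs:subClassOf chaining: each class points to
# its direct superclass, and reasoning walks the chain upward.
sub_class_of = {"ex:Dog": "ex:Animal", "ex:Animal": "ex:LivingThing"}

def superclasses(cls):
    """All superclasses of cls, nearest first (transitive subClassOf)."""
    out = []
    while cls in sub_class_of:
        cls = sub_class_of[cls]
        out.append(cls)
    return out
```

An RDFS reasoner uses exactly this transitivity to infer that any instance of `ex:Dog` is also an instance of `ex:Animal` and `ex:LivingThing`.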
  11. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.02
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  12. Green, R.; Panzer, M.: The ontological character of classes in the Dewey Decimal Classification 0.02
    
    Abstract
    Classes in the Dewey Decimal Classification (DDC) system function as neighborhoods around focal topics in captions and notes. Topical neighborhoods are generated through specialization and instantiation, complex topic synthesis, index terms and mapped headings, hierarchical force, rules for choosing between numbers, development of the DDC over time, and use of the system in classifying resources. Implications of representation using a formal knowledge representation language are explored.
  13. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.02
    
    Abstract
    LCSH has been known as such since 1975. It has always created headings to serve the LC collections rather than a theoretical basis. It started to replace cross-reference codes with thesaural codes in 1986, in a mechanical fashion. It was in no way transformed into a thesaurus. Its encyclopedic coverage and its pre-coordinated concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and literature in the field of classification and subject indexing, will provide some answers. For instance, see refers from a subject heading not used to another or others used. Exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship. Such relationships are pointed to by USE in LCSH. See also references are made from the broader subject to narrower parts of it, and also between associated subjects. They suggest lateral or vertical connexions as well as reciprocal relationships. They serve a coordination purpose for some, and lay down a methodical search itinerary for others. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology. Eventually they attended to a number of social sciences and humanities. Research derived from thesauri was voluminous. Numerous guidelines were designed. They did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references. It thus restricts relationships to BT/NT, USE and UF. This raises the question as to whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised. It was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO. It is amply elaborated upon by authors. The part-whole connexion is sometimes studied apart. The decision to replace cross-reference codes was an improvement. Relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
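The reciprocity of BT and NT noted in the appraisal (asserting a Broader Term implies the inverse Narrower Term) can be sketched as follows; the headings "Sonnets" and "Poetry" are hypothetical examples, not taken from LCSH.

```python
# Sketch of reciprocal BT/NT maintenance: recording a Broader Term link
# automatically records the inverse Narrower Term link, so the two
# indexes can never disagree.
bt, nt = {}, {}  # term -> set of broader terms / set of narrower terms

def add_bt(term, broader):
    """Assert 'term BT broader' and its reciprocal 'broader NT term'."""
    bt.setdefault(term, set()).add(broader)
    nt.setdefault(broader, set()).add(term)

add_bt("Sonnets", "Poetry")  # hypothetical heading pair
```

A whole-part code, as the appraisal argues, would need its own reciprocal pair alongside this one rather than being folded into BT/NT.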
  14. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.02
    
    Abstract
    This work-in-progress communication contains the papers and materials presented by Winfried Schmitz-Esser and Alexander Sigel in the joint workshop (with Roberto Poli) "Introducing Terminology-based Ontologies" at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006.
  15. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022)
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. 
The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
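The six kinds of information listed for a hyponymy-based terminological entry can be pictured as a simple data shape. This is an illustrative sketch only, assuming a flat record per entry; the field names are hypothetical and are not taken from HypoLexicon itself.

```python
from dataclasses import dataclass, field

# Illustrative data shape for a hyponymy-based terminological entry,
# following the six information types the thesis enumerates.
# All names are hypothetical, not HypoLexicon's actual schema.

@dataclass
class HyponymyEntry:
    hypernym: str                                      # (i) parent concept
    hyponyms: dict = field(default_factory=dict)       # (ii) child concepts, keyed by hyponymy level
    definitions: dict = field(default_factory=dict)    # (iii) terminological definitions per concept
    categories: list = field(default_factory=list)     # (iv) conceptual categories
    subtypes: list = field(default_factory=list)       # (v) hyponymy subtypes
    contexts: list = field(default_factory=list)       # (vi) hyponymic contexts


entry = HyponymyEntry(hypernym="rock")
entry.hyponyms[1] = ["igneous rock", "sedimentary rock"]
entry.subtypes.append("type_of")
```

A hierarchical template like this is what makes the hypernym-hyponym information easy to follow: each entry keeps the parent concept, its children by level, and the defining and contextual material together in one record.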
  16. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996)
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM, and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common-interface access to multiple knowledge bases. Each piece of information in the common knowledge base would have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Content
    Expanded version of a paper published in Advances in Knowledge Organization v.5 (1996): 165-173 (4th Annual ISKO Conference, Washington, D.C., 1996 July 15-18): SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology.
  17. Schmitz-Esser, W.: Formalizing terminology-based knowledge for an ontology independently of a particular language (2008)
    Abstract
    The last word in ontological thought and practice is exemplified with an axiomatic framework [a model for an Integrative Cross-Language Ontology (ICLO), cf. Poli, R., Schmitz-Esser, W., forthcoming 2007] that is highly general, based on natural language, and multilingual, can be implemented as topic maps, and may be openly enhanced by software available for particular languages. The basics of ontological modelling, the conditions for construction and maintenance, and the most salient points in application, such as cross-language text mining and knowledge generation, are addressed. The rationale is to open readers' eyes to the tremendous potential of terminology-based ontologies for principled Knowledge Organization and for the interchange and reuse of formalized knowledge.
  18. Buente, W.; Baybayan, C.K.; Hajibayova, L.; McCorkhill, M.; Panchyshyn, R.: Exploring the renaissance of wayfinding and voyaging through the lens of knowledge representation, organization and discovery systems (2020)
    Abstract
    The purpose of this paper is to provide a critical analysis, from an ethical perspective, of how the concept of indigenous wayfinding and voyaging is mapped in knowledge representation, organization and discovery systems. Design/methodology/approach: In this study, the Dewey Decimal Classification, the Library of Congress Subject Headings, the Library of Congress Classification systems and the Web of Science citation database were methodically examined to determine how these systems represent and facilitate the discovery of indigenous knowledge of wayfinding and voyaging. Findings: The analysis revealed that there was no dedicated representation of the indigenous practices of wayfinding and voyaging in the major knowledge representation, organization and discovery systems. By scattering indigenous practice across various, often very broad and unrelated classes, coherence in the record is disrupted, resulting in misrepresentation of these indigenous concepts. Originality/value: This study contributes to a relatively limited research literature on the representation and organization of indigenous knowledge of wayfinding and voyaging. It calls for fostering a better understanding and appreciation of the rich knowledge that indigenous cultures provide for an enlightened society.
  19. Thellefsen, M.: ¬The dynamics of information representation and knowledge mediation (2006)
    Abstract
    This paper presents an alternative approach to knowledge organization based on semiotic reasoning. The semantic distance between domain-specific terminology and KOS is analyzed by means of their different sign systems. It is argued that a faceted approach may provide the means needed to minimize the gap between knowledge domains and KOS.
  20. Mustafa El Hadi, W.: Terminologies, ontologies and information access (2006)
    Abstract
    Ontologies have become an important issue in research communities across several disciplines. This paper briefly discusses some innovative techniques for the automatic acquisition of terminology resources. It suggests that NLP-based ontologies are useful in reducing the cost of ontology engineering, and emphasizes that linguistic ontologies covering both ontological and lexical information can offer solutions, since they can be more easily updated by the resources of NLP products.
