Search (148 results, page 1 of 8)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.07
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
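    Code sketch
    The hierarchy growth step described in the abstract (adding inferred parent-child links while keeping the hierarchy acyclic) can be illustrated with a minimal Python sketch. The function names and the toy relation data are illustrative assumptions, not taken from the paper.
      def creates_cycle(hierarchy, parent, child):
          """Return True if adding parent -> child would close a cycle,
          i.e. if parent is already reachable from child."""
          stack, seen = [child], set()
          while stack:
              node = stack.pop()
              if node == parent:
                  return True
              if node in seen:
                  continue
              seen.add(node)
              stack.extend(hierarchy.get(node, ()))
          return False

      def grow_hierarchy(candidate_links):
          """Greedily add candidate parent-child links, skipping any link
          that would violate the acyclic structure of the hierarchy."""
          hierarchy = {}  # parent -> set of children
          for parent, child in candidate_links:
              if creates_cycle(hierarchy, parent, child):
                  continue  # conflict: keep the hierarchy acyclic
              hierarchy.setdefault(parent, set()).add(child)
          return hierarchy

      # Toy candidate links; the last one would introduce a cycle and is skipped.
      links = [("classification", "svm"), ("classification", "knn"),
               ("svm", "classification")]
      print(grow_hierarchy(links))  # {'classification': {'svm', 'knn'}}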
  2. Sperber, W.; Ion, P.D.F.: Content analysis and classification in mathematics (2011) 0.06
    Abstract
    The number of publications in mathematics grows faster each year. Presently far more than 100,000 mathematically relevant journal articles and books are published annually. Efficient and high-quality content analysis of this material is important for mathematical bibliographic services such as ZBMath or MathSciNet. Content analysis has different facets and levels: classification, keywords, abstracts and reviews, and (in the future) formula analysis. It is the opinion of the authors that the different levels have to be enhanced and combined using the methods and technology of the Semantic Web. In the presentation, the problems and deficits of the existing methods and tools, the state of the art, and current activities are discussed. As a first step, the Mathematical Subject Classification Scheme (MSC) has been encoded with the Simple Knowledge Organization System (SKOS) and the Resource Description Framework (RDF) at its recent revision to MSC2010. In principle, the use of SKOS opens new possibilities for the enrichment and wider deployment of this classification scheme and for machine-based content analysis of mathematical publications.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
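    Code sketch
    As a rough illustration of the SKOS encoding mentioned in the abstract, the following sketch builds a single MSC-style concept with rdflib. The base URI, labels, and the broader class are placeholders chosen for the example, not the official MSC2010/SKOS data.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      # Placeholder namespace; the published MSC data may use a different base URI.
      MSC = Namespace("http://example.org/msc2010/")

      g = Graph()
      g.bind("skos", SKOS)

      concept = MSC["68T30"]
      g.add((concept, RDF.type, SKOS.Concept))
      g.add((concept, SKOS.notation, Literal("68T30")))
      g.add((concept, SKOS.prefLabel, Literal("Knowledge representation", lang="en")))
      g.add((concept, SKOS.broader, MSC["68Txx"]))  # link to the assumed parent class

      print(g.serialize(format="turtle"))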
  3. Buizza, G.: Subject analysis and indexing : an "Italian version" of the analytico-synthetic model (2011) 0.05
    Series
    IFLA series on bibliographic control; vol. 42
    Source
    Subject access: preparing for the future. Proceedings of the IFLA satellite conference "Looking at the Past and Preparing for the Future", sponsored by the IFLA Classification and Indexing Section, Florence, August 20-21, 2009. Eds.: P. Landry et al
  4. Broughton, V.: Language related problems in the construction of faceted terminologies and their automatic management (2008) 0.05
    Content
    The paper describes current work on the generation of a thesaurus format from the schedules of the Bliss Bibliographic Classification 2nd edition (BC2). The practical problems that occur in moving from a concept-based approach to a terminological approach cluster around issues of vocabulary control that are not fully addressed in a systematic structure. These difficulties can be exacerbated within domains in the humanities because large numbers of culture-specific terms may need to be accommodated in any thesaurus. The ways in which these problems can be resolved within the context of a semi-automated approach to thesaurus generation have consequences for the management of classification data in the source vocabulary. The way in which the vocabulary is marked up for the purpose of machine manipulation is described, and some of the implications for editorial policy are discussed and examples given. The value of the classification notation as a language-independent representation and mapping tool should not be sacrificed in such an exercise.
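    Code sketch
    A much-simplified sketch of the kind of semi-automated derivation discussed above: thesaurus-style broader-term (BT) links are read off a classification schedule by treating the longest matching notation prefix as the broader class. Real BC2 notation and editorial markup are considerably richer; the captions and notations here are toy data.
      # Toy schedule: notation -> caption.
      schedule = {
          "K": "Society",
          "KV": "Social welfare",
          "KVK": "Child welfare",
      }

      def broader_term(notation, schedule):
          """Caption of the longest proper prefix notation in the schedule,
          taken here as the broader term of the given class."""
          for length in range(len(notation) - 1, 0, -1):
              prefix = notation[:length]
              if prefix in schedule:
                  return schedule[prefix]
          return None

      for notation, caption in schedule.items():
          bt = broader_term(notation, schedule)
          if bt:
              print(f"{caption}\n  BT {bt}")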
  5. Broughton, V.: Facet analysis as a tool for modelling subject domains and terminologies (2011) 0.04
    Abstract
    Facet analysis is proposed as a general theory of knowledge organization, with an associated methodology that may be applied to the development of terminology tools in a variety of contexts and formats. Faceted classifications originated as a means of representing complexity in semantic content that facilitates logical organization and effective retrieval in a physical environment. This is achieved through meticulous analysis of concepts, their structural and functional status (based on fundamental categories), and their inter-relationships. These features provide an excellent basis for the general conceptual modelling of domains, and for the generation of KOS other than systematic classifications. This is demonstrated by the adoption of a faceted approach to many web search and visualization tools, and by the emergence of a facet based methodology for the construction of thesauri. Current work on the Bliss Bibliographic Classification (Second Edition) is investigating the ways in which the full complexity of faceted structures may be represented through encoded data, capable of generating intellectually and mechanically compatible forms of indexing tools from a single source. It is suggested that a number of research questions relating to the Semantic Web could be tackled through the medium of facet analysis.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  6. Priss, U.: Description logic and faceted knowledge representation (1999) 0.03
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  7. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.03
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), the paper addresses the faceted classification approach as applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation in an effective way for modeling. Based on this perspective, an ontology of the music domain has been analyzed that would serve as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  8. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.03
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to the existing textual features, the second approach incorporates structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the title and abstract of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to better performance of the method. This thesis discusses some of the possible contributing factors to these results. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting the future co-citation links between disparate research papers.
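    Code sketch
    A minimal sketch of the kind of heterogeneous bibliographic graph and path-based metadata feature described in the abstract, using networkx. The node types, the sample records, and the single meta-path counted here (paper-term-paper) are illustrative assumptions rather than the thesis' actual feature set.
      import networkx as nx

      # Heterogeneous graph: every node carries a 'kind' attribute.
      G = nx.Graph()
      G.add_nodes_from(["p1", "p2"], kind="paper")
      G.add_nodes_from(["alice"], kind="author")
      G.add_nodes_from(["ontology", "retrieval"], kind="term")
      G.add_edges_from([("p1", "alice"), ("p1", "ontology"), ("p1", "retrieval"),
                        ("p2", "ontology"), ("p2", "retrieval")])

      def metapath_count(G, a, b, middle_kind):
          """Count paths a - x - b where x has the given node kind,
          a simple path-based feature for predicting a future co-citation link."""
          return sum(1 for x in G.neighbors(a)
                     if G.nodes[x]["kind"] == middle_kind and G.has_edge(x, b))

      print(metapath_count(G, "p1", "p2", "term"))  # -> 2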
  9. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.03
    Abstract
    Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document or dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document or a part of a dataset or a segment of a multimedia content to concepts and relations in an ontology of the domain(s) of interest.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  10. Kruk, S.R.; Cygan, M.; Gzella, A.; Woroniecki, T.; Dabrowski, M.: JeromeDL: the social semantic digital library (2009) 0.02
    Abstract
    The initial research on semantic digital libraries resulted in the design and implementation of JeromeDL; current research on online social networking and information discovery delivered new sets of features that were implemented in JeromeDL. Eventually, the digital library was redesigned to follow the architecture of a social semantic digital library. JeromeDL describes each resource using three types of metadata: structural, bibliographic, and community. It delivers services leveraging each of these information types. Annotations based on the structural and legacy metadata and on the bibliographic ontology are rendered to users in a single, mixed representation of library resources. Community annotations are managed by separate services, such as social semantic collaborative filtering or the blogging component.
  11. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.02
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares the sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  12. Frâncu, V.: Subjects in FRBR and poly-hierarchical thesauri as possible knowledge organization tools (2006) 0.02
    Abstract
    The paper presents the possibilities offered by poly-hierarchical conceptual structures as knowledge organizers, starting from the FRBR entity-relation model. Of the ten entities defined in the FRBR model, the first six, the bibliographic entities plus those representing intellectual responsibilities, are clearly described by their attributes. Unlike those, the other four, which represent subjects in their own right (concepts, objects, events, and places), have only the term for the entity as an attribute. Subjects have to be treated more extensively in a revised version of the FRBR model, with particular attention to the semantic and syntactic relations between the concepts representing subjects themselves and between these concepts and the terms used in indexing. The conceptual model of poly-hierarchical thesauri is regarded as an entity-relation model, one capable of accommodating subjects in the bibliographic universe both conceptually and relationally. Poly-hierarchical thesauri are considered frameworks or templates meant to enhance knowledge representation and to support information searching.
  13. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  14. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  15. Becker, H.-G.; Förster, F.: Vernetztes Wissen : Ereignisse in der bibliografischen Dokumentation (2010) 0.02
    Abstract
    The memory institutions library, museum, and archive each have their own models for describing the objects and materials they hold. For more precise bibliographic description, the library field created the static, user-needs-driven model "Functional Requirements for Bibliographic Records" (FRBR); its imprecise concept of "work" is discussed here, as is the difficulty of transferring the model to non-book materials. The museum world bases the representation of its holdings on the CIDOC Conceptual Reference Model (CRM), which has proved helpful for describing heterogeneous museum objects, i.e. artefacts of artistic and intellectual creation. Through mutual exchange between IFLA and ICOM, FRBR was harmonized with CRM. The result, FRBRoo (object-oriented FRBR), shows its advantages in a stricter interpretation of the Group 1 entities of the FRBR model and in a more precise modelling of processes and events. Examples of the application of FRBRoo demonstrate its added value for the scholarly description of manuscript, printed, and online sources, works of the performing arts, maps, and music materials within a CRM-based database.
  16. Melgar Estrada, L.M.: Topic maps from a knowledge organization perspective (2011) 0.02
    Abstract
    This article comprises a literature review and conceptual analysis of Topic Maps, the ISO standard for representing information about the structure of information resources, according to the principles of Knowledge Organization (KO). Using the main principles from this discipline, the study shows how Topic Maps is proposed as an ontology model independent of technology. Topic Maps constitutes a 'bibliographic' meta-language able to represent, extend, and integrate almost all existing Knowledge Organization Systems (KOS) in a standards-based generic model applicable to digital content and to the Web. This report also presents an inventory of the current applications of Topic Maps in Libraries, Archives, and Museums (LAM), as well as in the Digital Humanities. Finally, some directions for further research are suggested, which relate Topic Maps to the main research trends in KO.
  17. Campbell, D.G.: Farradane's relational indexing and its relationship to hyperlinking in Alzheimer's information (2012) 0.02
    Abstract
    In an ongoing investigation of the relationship between Jason Farradane's relational indexing principles and concept combination in Web-based information on Alzheimer's Disease, the hyperlinks of three consumer health information websites are examined to see how well the linking relationships map to Farradane's relational operators, as well as to the linking attributes in HTML 5. The links were found to be largely bibliographic in nature, and as such mapped well onto HTML 5. Farradane's operators were less effective at capturing the individual links; nonetheless, the two dimensions of his relational matrix, association and discrimination, reveal a crucial underlying strategy of the emotionally charged mediation between complex information and users who are consulting it under severe stress.
  18. Zhang, L.: Linking information through function (2014) 0.02
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  19. Wilson, T.: ¬The strict faceted classification model (2006) 0.02
    Abstract
    Faceted classification, at its core, implies orthogonality: every facet axis exists at right angles to (i.e., independently of) every other facet axis. That is why a faceted classification is sometimes represented with a chart. A set of desserts, for example, can be classified by confection type and, orthogonally, by flavor.
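    Code sketch
    The orthogonality claim can be made concrete with a tiny sketch: each facet is an independent axis, and every compound class is one value chosen per axis. The facet values are invented for illustration.
      from itertools import product

      # Two orthogonal facet axes for a toy dessert classification.
      facets = {
          "confection_type": ["cake", "pie", "cookie"],
          "flavor": ["chocolate", "lemon", "almond"],
      }

      # The axes do not constrain each other, which is what strict
      # (orthogonal) faceting implies: 3 * 3 = 9 possible compound classes.
      compound_classes = list(product(*facets.values()))
      print(len(compound_classes))   # 9
      print(compound_classes[0])     # ('cake', 'chocolate')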
  20. Zeng, M.L.; Panzer, M.; Salaba, A.: Expressing classification schemes with OWL 2 Web Ontology Language : exploring issues and opportunities based on experiments using OWL 2 for three classification schemes 0.02
    Abstract
    Based on research on three general classification schemes, this paper discusses issues encountered when expressing classification schemes in SKOS and explores opportunities for resolving major issues using the OWL 2 Web Ontology Language.
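    Code sketch
    To make the modelling choice concrete, this sketch expresses one tiny classification fragment twice with rdflib: once as SKOS concepts linked by skos:broader, and once as OWL classes linked by rdfs:subClassOf, which carries stronger set-theoretic semantics. The scheme namespace and captions are placeholders; the paper's actual experiments are not reproduced here.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import OWL, RDF, RDFS, SKOS

      EX = Namespace("http://example.org/scheme/")  # placeholder namespace
      g = Graph()

      # (a) SKOS view: classes modelled as concepts linked by skos:broader.
      g.add((EX["600"], RDF.type, SKOS.Concept))
      g.add((EX["610"], RDF.type, SKOS.Concept))
      g.add((EX["610"], SKOS.broader, EX["600"]))
      g.add((EX["610"], SKOS.prefLabel, Literal("Medicine", lang="en")))

      # (b) OWL 2 view: classes modelled as owl:Class with subclass axioms.
      g.add((EX["c600"], RDF.type, OWL.Class))
      g.add((EX["c610"], RDF.type, OWL.Class))
      g.add((EX["c610"], RDFS.subClassOf, EX["c600"]))
      g.add((EX["c610"], RDFS.label, Literal("Medicine", lang="en")))

      print(g.serialize(format="turtle"))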

Languages

  • e 132
  • d 13
  • pt 1
  • sp 1

Types

  • a 109
  • el 31
  • m 10
  • x 7
  • n 4
  • p 3
  • r 1
  • s 1