Search (18 results, page 1 of 1)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
  1. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.02
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
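    To make the framework concrete, here is a minimal sketch of a small SKOS concept scheme, assuming the Python rdflib library (the article itself prescribes no toolkit, and the URIs are invented):

        # Build a tiny SKOS vocabulary: one scheme, two concepts,
        # one broader/narrower pair. Requires rdflib (pip install rdflib).
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        EX = Namespace("http://example.org/thesaurus/")  # hypothetical URIs
        g = Graph()
        g.bind("skos", SKOS)

        g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
        for concept, label in [(EX.animals, "Animals"), (EX.cats, "Cats")]:
            g.add((concept, RDF.type, SKOS.Concept))
            g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
            g.add((concept, SKOS.inScheme, EX.scheme))
        g.add((EX.cats, SKOS.broader, EX.animals))  # thesaurus-style hierarchy

        print(g.serialize(format="turtle"))  # publishable RDF/Turtle

    Serialized this way, the vocabulary can be harvested and merged with SKOS data from other collections, as the abstract describes.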
  2. Green, R.: WordNet (2009) 0.02
    Abstract
    WordNet, a lexical database for English, is organized around semantic and lexical relationships between synsets, concepts represented by sets of synonymous word senses. Offering reasonably comprehensive coverage of the nouns, verbs, adjectives, and adverbs of general English, WordNet is a widely used resource for dealing with the ambiguity that arises from homonymy, polysemy, and synonymy. WordNet is used in many information-related tasks and applications (e.g., word sense disambiguation, semantic similarity, lexical chaining, alignment of parallel corpora, text segmentation, sentiment and subjectivity analysis, text classification, information retrieval, text summarization, question answering, information extraction, and machine translation).
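    As a small illustration of the disambiguation and similarity tasks listed above, the following sketch uses NLTK's WordNet interface (an assumption: the chapter itself is not tied to NLTK; the 'wordnet' corpus must be downloaded first):

        # List senses (synsets) of a polysemous word, then compare two
        # concepts via the hypernym hierarchy. Requires: pip install nltk
        # and nltk.download("wordnet").
        from nltk.corpus import wordnet as wn

        # Each synset is one sense of "bank" (river bank, financial bank, ...).
        for synset in wn.synsets("bank")[:3]:
            print(synset.name(), "-", synset.definition())

        # A simple semantic similarity score based on path length
        # between synsets in the noun hierarchy (1.0 = identical).
        dog, cat = wn.synset("dog.n.01"), wn.synset("cat.n.01")
        print(dog.path_similarity(cat))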
  3. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.02
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning, e.g., subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e., checking if a Topic Map is well formed, includes our merging conditions.
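    As a rough illustration of one XTM merging condition (a hedged Python sketch, not the paper's TMUML/OCL formalization): two topics are merged when their subject identifiers intersect, and the merged topic takes the union of the characteristics of both.

        # Name-independent merging: shared subject identifiers trigger a merge.
        from dataclasses import dataclass, field

        @dataclass
        class Topic:
            subject_identifiers: set[str]
            names: set[str] = field(default_factory=set)

        def should_merge(a: Topic, b: Topic) -> bool:
            # XTM: topics reifying the same subject must be merged.
            return bool(a.subject_identifiers & b.subject_identifiers)

        def merge(a: Topic, b: Topic) -> Topic:
            return Topic(a.subject_identifiers | b.subject_identifiers,
                         a.names | b.names)

        t1 = Topic({"http://example.org/subject/opera"}, {"Opera"})
        t2 = Topic({"http://example.org/subject/opera"}, {"Oper"})
        if should_merge(t1, t2):
            print(merge(t1, t2))  # one topic, both names preserved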
  4. Fluit, C.; Horst, H. ter; Meer, J. van der; Sabou, M.; Mika, P.: Spectacle (2004) 0.02
    Abstract
    Many Semantic Web initiatives improve the capabilities of machines to exchange the meaning of information with other machines. These efforts increase the quality of an application's results, but the user interfaces take little or no advantage of the semantic richness. For example, an ontology-based search engine uses its ontology when evaluating the user's query (e.g. for query formulation, disambiguation or evaluation), but fails to use it to significantly enrich the presentation of the results to a human user. For example, one could imagine replacing the endless list of hits with a structured presentation based on the semantic properties of the hits. Another problem is that the modelling of a domain is done from a single perspective (most often that of the information provider). Therefore, a presentation based on the resulting ontology is unlikely to satisfy the needs of all the different types of users of the information. So even assuming an ontology for the domain is in place, mapping that ontology to the needs of individual users - based on their tasks, expertise and personal preferences - is not trivial.
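    The structured-presentation idea can be sketched in a few lines (hypothetical data and field names; this is not Spectacle's actual API): group the hits by a semantic property taken from the ontology instead of rendering a flat list.

        # Replace the "endless list of hits" with hits grouped by class.
        from collections import defaultdict

        hits = [  # invented search results, each typed by the ontology
            {"title": "Mona Lisa", "cls": "Painting"},
            {"title": "David", "cls": "Sculpture"},
            {"title": "Guernica", "cls": "Painting"},
        ]

        by_class: dict[str, list[str]] = defaultdict(list)
        for hit in hits:
            by_class[hit["cls"]].append(hit["title"])

        for cls, titles in sorted(by_class.items()):
            print(f"{cls}: {', '.join(titles)}")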
  5. Breslin, J.G.: Social semantic information spaces (2009) 0.02
    Abstract
    The structural and syntactic web put in place in the early 90s is still much the same as what we use today: resources (web pages, files, etc.) connected by untyped hyperlinks. By untyped, we mean that there is no easy way for a computer to figure out what a link between two pages means - for example, on the W3C website, there are hundreds of links to the various organisations that are registered members of the association, but there is nothing explicitly saying that the link is to an organisation that is a "member of" the W3C or what type of organisation is represented by the link. On John's work page, he links to many papers he has written, but it does not explicitly say that he is the author of those papers or that he wrote such-and-such when he was working at a particular university. In fact, the Web was envisaged to be much more, as one can see from the image in Fig. 1, which is taken from Tim Berners-Lee's original outline for the Web in 1989, entitled "Information Management: A Proposal". In this proposal, all the resources are connected by links describing the type of relationship, e.g. "wrote", "describe", "refers to", etc. This is a precursor to the Semantic Web, which we will come back to later.
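    A minimal sketch of such typed links, assuming Python's rdflib and invented URIs (the chapter prescribes no particular toolkit): the nature of the relationship, not just the connection, becomes machine-readable.

        from rdflib import Graph, Namespace
        from rdflib.namespace import DCTERMS, FOAF, RDF

        EX = Namespace("http://example.org/")  # hypothetical resources
        g = Graph()

        # An HTML link from John's page to a paper says nothing about *why*;
        # the typed RDF statements below make the relationships explicit.
        g.add((EX.john, RDF.type, FOAF.Person))
        g.add((EX.paper1, DCTERMS.creator, EX.john))  # John "wrote" the paper
        g.add((EX.acme, EX.memberOf, EX.w3c))         # invented "member of" predicate

        for s, p, o in g:
            print(s, p, o)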
  6. Haslhofer, B.; Knežević, P.: The BRICKS digital library infrastructure (2009) 0.02
    Abstract
    Service-oriented architectures, and the wider acceptance of decentralized peer-to-peer architectures, enable the transition from integrated, centrally controlled systems to federated and dynamically configurable systems. The benefits for the individual service providers and users are robustness of the system, independence of central authorities and flexibility in the usage of services. This chapter provides details of the European project BRICKS, which aims at enabling integrated access to distributed resources in the Cultural Heritage domain. The target audience is broad and heterogeneous and involves cultural heritage and educational institutions, the research community, industry, and the general public. The project idea is motivated by the fact that the amount of digital information and digitized content is continuously increasing, but much effort still has to be expended to discover and access it. The reasons for such a situation are heterogeneous data formats, restricted access, proprietary access interfaces, etc. Typical usage scenarios are integrated queries among several knowledge resources, e.g. to discover all Italian artifacts from the Renaissance in European museums. Another example is to follow the life cycle of historic documents whose physical copies are distributed all over Europe. A standard method for integrated access is to place all available content and metadata in a central place. Unfortunately, such a solution requires a quite powerful and costly infrastructure if the volume of data is large. Considerations of cost optimization are highly important for Cultural Heritage institutions, especially if they are funded from public money. Therefore, better usage of the existing resources, i.e. a decentralized/P2P approach, promises to deliver a significantly less costly system, and does not mean sacrificing too much on the performance side.
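    The integrated-query idea can be sketched as a federated fan-out (a hedged illustration with invented endpoints, not the BRICKS API): send the query to each collection and merge the results, rather than copying all content and metadata into one costly central index.

        # Query several independent collections concurrently, then merge.
        from concurrent.futures import ThreadPoolExecutor

        def search_collection(endpoint: str, query: str) -> list[str]:
            # Placeholder for a real remote call (HTTP, P2P message, ...).
            return [f"{endpoint}: hit for '{query}'"]

        endpoints = ["museum-a.example", "museum-b.example", "archive-c.example"]
        query = "Italian Renaissance artifacts"

        with ThreadPoolExecutor() as pool:
            per_collection = pool.map(lambda e: search_collection(e, query), endpoints)

        merged = [hit for hits in per_collection for hit in hits]
        print(merged)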
  7. Sure, Y.; Erdmann, M.; Studer, R.: OntoEdit: collaborative engineering of ontologies (2004) 0.01
    Abstract
    Developing ontologies is central to our vision of Semantic Web-based knowledge management. The methodology described in Chapter 3 guides the development of ontologies for different applications. However, because of the size of ontologies, their complexity, their formal underpinnings and the necessity to come towards a shared understanding within a group of people when defining an ontology, ontology construction is still far from being a well-understood process. Concerning the methodology, OntoEdit focuses on three of the main steps for ontology development (the methodology is described in Chapter 3), viz. kick-off, refinement, and evaluation. We describe the steps supported by OntoEdit and focus on the collaborative aspects that occur during each step. First, all requirements of the envisaged ontology are collected during the kick-off phase. Typically for ontology engineering, ontology engineers and domain experts are joined in a team that works together on a description of the domain and the goal of the ontology, design guidelines, available knowledge sources (e.g. re-usable ontologies and thesauri), potential users, and use cases and applications supported by the ontology. The output of this phase is a semi-formal description of the ontology. Second, during the refinement phase, the team extends the semi-formal description in several iterations and formalizes it in an appropriate representation language like RDF(S) or, more advanced, DAML+OIL. The output of this phase is a mature ontology (the 'target ontology'). Third, the target ontology needs to be evaluated according to the requirement specifications. Typically this phase serves as a proof of the usefulness of ontologies (and ontology-based applications) and may involve the engineering team as well as end users of the targeted application. The output of this phase is an evaluated ontology, ready for roll-out into a productive environment. Support for these collaborative development steps within the ontology development methodology is crucial in order to meet the conflicting needs for ease of use and construction of complex ontology structures. We now illustrate OntoEdit's support for each of the supported steps. The examples shown are taken from the Swiss Life case study on skills management (cf. Chapter 12).
  8. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    Date
    31. 7.2010 16:58:22
  9. Priss, U.: Faceted information representation (2000) 0.01
    Date
    22. 1.2016 17:47:06
  10. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie [Semantic search in a university ontology] (2005) 0.01
    Date
    11. 2.2011 18:22:58
  11. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.01
    Abstract
    Linguists in the structuralist tradition (e.g., Lyons, 1977; Saussure, 1959) have asserted that concepts cannot be defined on their own but only in relation to other concepts. Semantic relations appear to reflect a logical structure in the fundamental nature of thought (Caplan & Herrmann, 1993). Green, Bean, and Myaeng (2002) noted that semantic relations play a critical role in how we represent knowledge psychologically, linguistically, and computationally, and that many systems of knowledge representation start with a basic distinction between entities and relations. Green (2001, p. 3) said that "relationships are involved as we combine simple entities to form more complex entities, as we compare entities, as we group entities, as one entity performs a process on another entity, and so forth. Indeed, many things that we might initially regard as basic and elemental are revealed upon further examination to involve internal structure, or in other words, internal relationships." Concepts and relations are often expressed in language and text. Language is used not just for communicating concepts and relations, but also for representing, storing, and reasoning with concepts and relations. We shall examine the nature of semantic relations from a linguistic and psychological perspective, with an emphasis on relations expressed in text. The usefulness of semantic relations in information science, especially in ontology construction, information extraction, information retrieval, question-answering, and text summarization is discussed. Research and development in information science have focused on concepts and terms, but the focus will increasingly shift to the identification, processing, and management of relations to achieve greater effectiveness and refinement in information science techniques. Previous chapters in ARIST on natural language processing (Chowdhury, 2003), text mining (Trybula, 1999), information retrieval and the philosophy of language (Blair, 2003), and query expansion (Efthimiadis, 1996) provide a background for this discussion, as semantic relations are an important part of these applications.
  12. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.01
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  13. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.01
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  14. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.01
    Date
    1. 8.2010 12:35:22
  15. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    Date
    3.12.2016 18:39:22
  16. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.01
    Date
    28.11.2016 12:43:22
  17. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.01
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by the differing granularities of the two original schemes and their representation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries (a hypothetical sample is sketched below, after the source note), and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
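    A hedged sample of what one encoded CCT entry might look like, pairing a classification class with its mapped thesaurus term (URIs, labels, notation, and the choice of skos:exactMatch are invented for illustration; the poster's actual encoding may differ):

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        CCT = Namespace("http://example.org/cct/")  # hypothetical namespace
        g = Graph()
        g.bind("skos", SKOS)

        # A CLC-style class, carrying its notation and a label.
        cls = CCT["class/TP391"]
        g.add((cls, RDF.type, SKOS.Concept))
        g.add((cls, SKOS.notation, Literal("TP391")))
        g.add((cls, SKOS.prefLabel, Literal("Information processing", lang="en")))

        # The thesaurus term mapped to that class, plus the two-way mapping
        # that CCT provides between classes and thesaurus terms.
        term = CCT["term/information-processing"]
        g.add((term, RDF.type, SKOS.Concept))
        g.add((term, SKOS.prefLabel, Literal("Information processing", lang="en")))
        g.add((cls, SKOS.exactMatch, term))
        g.add((term, SKOS.exactMatch, cls))

        print(g.serialize(format="turtle"))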
  18. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.01
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a