Search (228 results, page 2 of 12)

  • theme_ss:"Wissensrepräsentation"
  • year_i:[2000 TO 2010}
  1. Yi, M.: Information organization and retrieval using a topic maps-based ontology : results of a task-based evaluation (2008) 0.00
    0.0032090992 = product of:
      0.0064181983 = sum of:
        0.0064181983 = product of:
          0.012836397 = sum of:
            0.012836397 = weight(_text_:a in 2369) [ClassicSimilarity], result of:
              0.012836397 = score(doc=2369,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.24171482 = fieldWeight in 2369, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2369)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
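    Every hit below carries an explain tree of this same ClassicSimilarity (tf-idf) shape; only the constants change. As a reading aid, here is a minimal Python sketch that reproduces the arithmetic of the first tree, using only values copied from it (the idf comment reflects Lucene's documented ClassicSimilarity formula):

      import math

      # Constants copied verbatim from the explain tree for doc 2369.
      freq       = 20.0         # termFreq of "a" in the scored field
      idf        = 1.153047    # ln(maxDocs / (docFreq + 1)) + 1 = ln(44218 / 37943) + 1
      query_norm = 0.046056706
      field_norm = 0.046875    # per-field length normalization

      query_weight = idf * query_norm             # 0.053105544
      tf           = math.sqrt(freq)              # 4.472136
      field_weight = tf * idf * field_norm        # 0.24171482
      score        = query_weight * field_weight  # 0.012836397

      # The two coord(1/2) lines each halve the score on the way up the tree.
      print(score * 0.5 * 0.5)                    # 0.0032090992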
    
    Abstract
    As information becomes richer and more complex, alternative information-organization methods are needed to retrieve information from various systems, including the Web, more effectively and efficiently. The objective of this study is to explore how a Topic Maps-based ontology approach affects users' searching performance. Forty participants took part in a task-based evaluation in which two dependent variables, recall and search time, were measured. The results indicate that a Topic Maps-based ontology information retrieval (TOIR) system has a significant and positive effect on both recall and search time, compared to a thesaurus-based information retrieval (TIR) system. These results suggest that the inclusion of a Topic Maps-based ontology is a beneficial approach when designing information retrieval systems.
    Type
    a
  2. Calegari, S.; Sanchez, E.: Object-fuzzy concept network : an enrichment of ontologies in semantic information retrieval (2008) 0.00
    0.0031642143 = product of:
      0.0063284286 = sum of:
        0.0063284286 = product of:
          0.012656857 = sum of:
            0.012656857 = weight(_text_:a in 2393) [ClassicSimilarity], result of:
              0.012656857 = score(doc=2393,freq=28.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.23833402 = fieldWeight in 2393, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2393)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article shows how a fuzzy ontology-based approach can improve semantic document retrieval. After formally defining a fuzzy ontology and a fuzzy knowledge base, a new type of fuzzy relationship called (semantic) correlation, which links the concepts or entities in a fuzzy ontology, is discussed. These correlations, first assigned by experts, are updated after querying or when a document is inserted into a database. Moreover, in order to define dynamic knowledge of a domain that adapts itself to the context, it is shown how to handle a tradeoff between the correct definition of an object, taken from the ontology structure, and the actual meaning assigned to it by individuals. The notion of a fuzzy concept network is extended to incorporate database objects, so that entities and documents can be represented in the network in the same way. An information retrieval (IR) algorithm using an object-fuzzy concept network (O-FCN) is introduced and described. This algorithm allows us to derive a unique path among the entities involved in a query to obtain maximal semantic associations in the knowledge domain. Finally, the study has been validated by querying a database using fuzzy recall, fuzzy precision, and coefficient variant measures in the crisp and fuzzy cases.
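    Illustration only (not taken from the article): the expert-assigned correlations that are "updated after querying" suggest a store of weighted entity pairs nudged toward observed co-usage. The pair names, starting weight and learning-rate update rule below are assumptions made for this sketch; the article defines its own update scheme.

      # Fuzzy correlation store: entity pairs with a degree in [0, 1].
      correlation = {("wine", "cheese"): 0.6}  # initial expert assignment

      def reinforce(pair, observed, rate=0.1):
          """Drift the stored degree toward an observed co-usage signal."""
          old = correlation.get(pair, 0.0)
          correlation[pair] = old + rate * (observed - old)

      reinforce(("wine", "cheese"), observed=1.0)   # a query linked the two entities
      print(round(correlation[("wine", "cheese")], 2))  # 0.64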
    Type
    a
  3. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.00
    0.0030255679 = product of:
      0.0060511357 = sum of:
        0.0060511357 = product of:
          0.012102271 = sum of:
            0.012102271 = weight(_text_:a in 537) [ClassicSimilarity], result of:
              0.012102271 = score(doc=537,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22789092 = fieldWeight in 537, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=537)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining the status and relationships of concepts in subject-based tools that use a controlled vocabulary, and the extent to which it can serve as a general theory of knowledge organization rather than merely a methodology for structuring classifications.
  4. Schutz, A.; Buitelaar, P.: RelExt: a tool for relation extraction from text in ontology extension (2005) 0.00
    0.0029294936 = product of:
      0.005858987 = sum of:
        0.005858987 = product of:
          0.011717974 = sum of:
            0.011717974 = weight(_text_:a in 1078) [ClassicSimilarity], result of:
              0.011717974 = score(doc=1078,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22065444 = fieldWeight in 1078, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1078)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Domain ontologies very rarely model verbs as relations holding between concepts. However, the role of the verb as a central connecting element between concepts is undeniable. Verbs specify the interaction between the participants of some action or event by expressing relations between them. In parallel, it can be argued from an ontology engineering point of view that verbs express a relation between two classes that specify domain and range. The work described here is concerned with relation extraction for ontology extension along these lines. We describe a system (RelExt) that is capable of automatically identifying highly relevant triples (pairs of concepts connected by a relation) over concepts from an existing ontology. RelExt works by extracting relevant verbs and their grammatical arguments (i.e., terms) from a domain-specific text collection and computing corresponding relations through a combination of linguistic and statistical processing. The paper includes a detailed description of the system architecture and evaluation results on a constructed benchmark. RelExt has been developed in the context of the SmartWeb project, which aims at providing intelligent information services via mobile broadband devices at the FIFA World Cup to be hosted in Germany in 2006. Such services include location-based navigational information as well as question answering in the football domain.
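    A hedged sketch of the general idea, not the RelExt system itself (the abstract does not specify its pipeline): collect subject-verb-object combinations as candidate relation triples, here using spaCy's dependency labels as a stand-in for the authors' linguistic processing; statistical filtering of the resulting counts would follow.

      import spacy
      from collections import Counter

      nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

      def verb_triples(texts):
          """Count (subject, verb, object) lemma triples as relation candidates."""
          triples = Counter()
          for doc in nlp.pipe(texts):
              for tok in doc:
                  if tok.pos_ != "VERB":
                      continue
                  subjects = [c for c in tok.children if c.dep_ == "nsubj"]
                  objects = [c for c in tok.children if c.dep_ == "dobj"]
                  for s in subjects:
                      for o in objects:
                          triples[(s.lemma_, tok.lemma_, o.lemma_)] += 1
          return triples

      print(verb_triples(["The striker scores a goal.", "Referees award penalties."]))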
    Source
    Semantic Web - ISWC 2005, 4th International Semantic Web Conference, ISWC 2005, Galway, Ireland, November 6-10, 2005, Proceedings. Eds.: Yolanda Gil, Enrico Motta, V. Richard Benjamins, Mark A. Musen
    Type
    a
  5. Loehrlein, A.; Jacob, E.K.; Lee, S.; Yang, K.: Development of heuristics in a hybrid approach to faceted classification (2006) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 247) [ClassicSimilarity], result of:
              0.011600202 = score(doc=247,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 247, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=247)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper describes work in progress to identify automated methods to complement and streamline the intellectual process in the generation of faceted schemes. It reports on the development of the word pair heuristic, the suffix heuristic, and the WordNet heuristic, and how the three heuristics integrate to produce an initial organization of terms from which a classificationist can more efficiently construct a faceted vocabulary.
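    The abstract names the heuristics without detailing them, so the following is only an invented illustration of what a suffix heuristic could look like: grouping candidate terms by derivational suffixes that often signal the same facet. The suffix-to-facet table is an assumption made for the example, not taken from the paper.

      from collections import defaultdict

      SUFFIX_FACETS = {  # hypothetical mapping for illustration
          "ing": "processes", "tion": "processes",
          "er": "agents", "ist": "agents",
          "ity": "properties", "ness": "properties",
      }

      def group_by_suffix(terms):
          groups = defaultdict(list)
          for term in terms:
              for suffix, facet in SUFFIX_FACETS.items():
                  if term.endswith(suffix):
                      groups[facet].append(term)
                      break
              else:
                  groups["unassigned"].append(term)
          return dict(groups)

      print(group_by_suffix(["indexing", "classification", "classifier", "specificity"]))
      # {'processes': ['indexing', 'classification'], 'agents': ['classifier'], 'properties': ['specificity']}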
    Source
    Knowledge organization for a global learning society: Proceedings of the 9th International ISKO Conference, 4-7 July 2006, Vienna, Austria. Eds.: G. Budin, C. Swertz and K. Mitgutsch
    Type
    a
  6. Panzer, M.: DDC, SKOS, and linked data on the Web (2008) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 4478) [ClassicSimilarity], result of:
              0.011481222 = score(doc=4478,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 4478, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4478)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Everything need not be miscellaneous: controlled vocabularies and classification in a Web world, OCLC/ISKO-NA Preconference Workshop, 10th International ISKO Conference, Montreal, Canada, August 5-8, 2008
    Type
    a
  7. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a large ontology from Wikipedia and WordNet (2008) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 3404) [ClassicSimilarity], result of:
              0.011481222 = score(doc=3404,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 3404, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3404)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and combined with taxonomic relations from WordNet. Type-checking techniques help keep YAGO's precision at 95%, as demonstrated by an extensive evaluation study. YAGO is based on a clean logical model with decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
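    A hedged sketch of the type-checking step described above (the entities, taxonomy, and relation signature below are invented placeholders, not YAGO data): a candidate fact extracted from an infobox is kept only if the argument's class chain in the WordNet-derived taxonomy reaches the relation's expected range.

      IS_A = {  # toy taxonomy: instance/class -> broader class
          "Albert_Einstein": "physicist", "physicist": "scientist",
          "scientist": "person", "Ulm": "city", "city": "location",
      }
      RELATION_RANGE = {"bornIn": "location", "hasAdvisor": "person"}

      def classes_of(entity):
          while entity in IS_A:
              entity = IS_A[entity]
              yield entity

      def type_checks(subject, relation, obj):
          """Accept the fact only if obj's ancestors include the relation's range."""
          return RELATION_RANGE[relation] in classes_of(obj)

      print(type_checks("Albert_Einstein", "bornIn", "Ulm"))      # True
      print(type_checks("Albert_Einstein", "hasAdvisor", "Ulm"))  # False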
    Type
    a
  8. Drexel, G.: Knowledge engineering for intelligent information retrieval (2001) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 4043) [ClassicSimilarity], result of:
              0.011481222 = score(doc=4043,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 4043, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4043)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a clustered approach to designing an overall ontological model together with a general rule-based component that serves as a mapping device. By observational criteria, a multi-lingual team of experts excerpts concepts from general communication in the media. The team then finds equivalent expressions in English, German, French, and Spanish. On the basis of a set of ontological and lexical relations, a conceptual network is built up. Concepts are thought to be universal. Objects unique in time and space are identified by names and explained by the universals as their instances. Our approach relies on multi-relational descriptions of concepts. It provides a powerful tool for documentation and conceptual language learning. First and foremost, our multi-lingual, polyhierarchical ontology fills the gap in semantically based information retrieval by generating enhanced and improved queries for Internet search.
    Type
    a
  9. Prieto-Díaz, R.: ¬A faceted approach to building ontologies (2002) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 2259) [ClassicSimilarity], result of:
              0.011481222 = score(doc=2259,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 2259, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2259)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An ontology is "an explicit conceptualization of a domain of discourse, and thus provides a shared and common understanding of the domain." We have been producing ontologies for millennia to understand and explain our rationale and environment. From Plato's philosophical framework to modern-day classification systems, ontologies are, in most cases, the product of extensive analysis and categorization. Only recently has the process of building ontologies become a research topic of interest. Today, ontologies are built very much ad hoc. A terminology is first developed, providing a controlled vocabulary for the subject area or domain of interest; it is then organized into a taxonomy in which key concepts are identified; and finally these concepts are defined and related to create an ontology. The intent of this paper is to show that domain analysis methods can be used for building ontologies. Domain analysis aims at generic models that represent groups of similar systems within an application domain. In this sense, it deals with the categorization of common objects and operations, with clear, unambiguous definitions of them, and with defining their relationships.
    Type
    a
  10. Hepp, M.; Bruijn, J. de: GenTax : a generic methodology for deriving OWL and RDF-S ontologies from hierarchical classifications, thesauri, and inconsistent taxonomies (2007) 0.00
    0.0028047764 = product of:
      0.005609553 = sum of:
        0.005609553 = product of:
          0.011219106 = sum of:
            0.011219106 = weight(_text_:a in 4692) [ClassicSimilarity], result of:
              0.011219106 = score(doc=4692,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21126054 = fieldWeight in 4692, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4692)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Hierarchical classifications, thesauri, and informal taxonomies are likely the most valuable input for creating, at reasonable cost, non-toy ontologies in many domains. They contain, readily available, a wealth of category definitions plus a hierarchy, and they reflect some degree of community consensus. However, their transformation into useful ontologies is not as straightforward as it appears. In this paper, we (1) show that it often depends on the context of usage whether an informal hierarchical categorization schema is a classification, a thesaurus, or a taxonomy, and (2) present a novel methodology for automatically deriving consistent RDF-S and OWL ontologies from such schemas. Finally, we (3) demonstrate the usefulness of this approach by transforming the two e-business categorization standards eCl@ss and UNSPSC into ontologies that overcome the limitations of earlier prototypes. Our approach allows for the script-based creation of meaningful ontology classes for a particular context while preserving the original hierarchy, even if the latter is not a real subsumption hierarchy in this particular context. Human intervention in the transformation is limited to checking some conceptual properties and identifying frequent anomalies, and the only input required is an informal categorization plus a notion of the target context. In particular, the approach does not require instance data, as ontology learning approaches usually do.
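    A minimal sketch of the transformation target, not the full GenTax methodology (the namespace and category tree below are placeholders): materializing an informal categorization as RDFS classes with rdflib. GenTax's context-specific class derivation and anomaly checks are deliberately left out of this sketch.

      from rdflib import Graph, Literal, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/schema#")  # placeholder namespace

      TREE = {  # child -> parent in the source categorization (invented)
          "LaserPrinters": "Printers",
          "InkjetPrinters": "Printers",
          "Printers": "OfficeEquipment",
      }

      g = Graph()
      g.bind("ex", EX)
      for child, parent in TREE.items():
          for name in (child, parent):
              g.add((EX[name], RDF.type, RDFS.Class))
              g.add((EX[name], RDFS.label, Literal(name)))
          g.add((EX[child], RDFS.subClassOf, EX[parent]))

      print(g.serialize(format="turtle"))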
    Type
    a
  11. Ibekwe-SanJuan, F.: Constructing and maintaining knowledge organization tools : a symbolic approach (2006) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 5595) [ClassicSimilarity], result of:
              0.0108246 = score(doc=5595,freq=32.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 5595, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To propose a comprehensive and semi-automatic method for constructing or updating knowledge organization tools such as thesauri. Design/methodology/approach - The paper proposes a comprehensive methodology for thesaurus construction and maintenance combining shallow NLP with a clustering algorithm and an information visualization interface. The resulting system, TermWatch, extracts terms from a text collection, mines semantic relations between them using complementary linguistic approaches and clusters terms using these semantic relations. The clusters are mapped onto a 2D space using an integrated visualization tool. Findings - The clusters formed exhibit the different relations necessary to populate a thesaurus or ontology: synonymy, generic/specific and relatedness. The clusters represent, for a given term, its closest neighbours in terms of semantic relations. Practical implications - This could change the way in which information professionals (librarians and documentalists) undertake knowledge organization tasks. TermWatch can be useful as a starting point for grasping the conceptual organization of knowledge in a huge text collection without having to read the texts, and then as a suggestive tool for populating the different hierarchies of a thesaurus or an ontology, because its clusters are based on semantic relations. Originality/value - This lies in several points: the combined use of linguistic relations with an adapted clustering algorithm, which is scalable and can handle sparse data. The paper proposes a comprehensive approach to semantic relations acquisition, whereas existing studies often use only one or two approaches. The domain knowledge maps produced by the system represent an added advantage over existing approaches to automatic thesaurus construction in that clusters are formed using semantic relations between domain terms. Thus, while offering a meaningful synthesis of the information contained in the original corpus through clustering, the results can be used for knowledge organization tasks (thesaurus building and ontology population). The system also constitutes a platform for performing several knowledge-oriented tasks such as science and technology watch, text mining and query refinement.
    Type
    a
  12. Grzonkowski, S.; Kruk, S.R.; Gzella, A.; Demczuk, J.; McDaniel, B.: Community-aware ontologies (2009) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 3382) [ClassicSimilarity], result of:
              0.0108246 = score(doc=3382,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 3382, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3382)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The term "social network" was first used in 1954 by J.A. Barnes. A social network is a structure consisting of nodes that represent individual people or organizations. Such a structure depicts the ways in which people are connected through diverse social familiarities such as acquaintance, friendship or close family bonds.
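    A toy rendering of that structure (names invented), with people as nodes and familiarities as labelled edges:

      import networkx as nx

      G = nx.Graph()
      G.add_edge("Ann", "Ben", tie="acquaintance")
      G.add_edge("Ann", "Chris", tie="friendship")
      G.add_edge("Chris", "Dana", tie="family")

      print(list(G.neighbors("Ann")))        # ['Ben', 'Chris']
      print(G.edges["Ann", "Chris"]["tie"])  # friendship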
    Type
    a
  13. Green, R.: Relationships in the Dewey Decimal Classification (DDC) : plan of study (2008) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 3397) [ClassicSimilarity], result of:
              0.0108246 = score(doc=3397,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 3397, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3397)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    EPC Exhibit 129-36.1 presented intermediate results of a project to connect Relative Index terms to topics associated with classes and to determine if those Relative Index terms approximated the whole of the corresponding class or were in standing room in the class. The Relative Index project constitutes the first stage of a long(er)-term project to instill a more systematic treatment of relationships within the DDC. The present exhibit sets out a plan of study for that long-term project.
  14. Schwarz, K.: Domain model enhanced search : a comparison of taxonomy, thesaurus and ontology (2005) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 4569) [ClassicSimilarity], result of:
              0.0108246 = score(doc=4569,freq=32.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 4569, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4569)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The results of this thesis are intended to support the information architect in designing a solution for improved search in a corporate environment. Specifically, we have examined the type of search problems that require a domain model to enhance the search process. There are several approaches to modeling a domain. We have considered three different types of domain modeling schemes: taxonomy, thesaurus and ontology. The intention is to support the information architect in making an informed choice between one or more of these schemes. In our opinion the main criteria for this choice are the modeling characteristics of a scheme and its suitability for application in the search process. The second chapter is a discussion of the modeling characteristics of each scheme, followed by a comparison between them. This should give an information architect an idea of which aspects of a domain can be modeled with each scheme. What is missing here is an indication of the effort required to model a domain with each scheme. There are too many factors that influence the amount of required effort, ranging from measurable factors such as domain size and resource characteristics to cultural matters such as the willingness to share knowledge and the existence of a project champion in the team to keep the project running. The third chapter shows what role domain models can play in each part of the search process. This gives an idea of the problems that domain models can solve. We have split the search process into individual parts to show that domain models can be applied very differently in the process. The fourth chapter makes recommendations about the suitability of each individual domain modeling scheme for improving search. Each scheme has particular characteristics that make it especially suitable for a given domain or search problem. In the appendix each case study is described in detail. These descriptions are intended to serve as a benchmark. The current problem of the enterprise can be compared to those described to see which case study is most similar, which solution was chosen, which problems arose and how they were dealt with. An important issue that we have not touched upon in this thesis is that of maintenance. The real problems of a domain model are revealed when it is applied in a search system and its deficits and wrong assumptions become clear. Adaptation and maintenance are always required. Unfortunately we have not been able to glean sufficient information about maintenance issues from our case studies to draw any meaningful conclusions.
  15. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 277) [ClassicSimilarity], result of:
              0.010739701 = score(doc=277,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 277, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=277)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper-case O) as used in philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) it provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.
    Type
    a
  16. Sure, Y.; Studer, R.: ¬A methodology for ontology-based knowledge management (2004) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 4400) [ClassicSimilarity], result of:
              0.010739701 = score(doc=4400,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 4400, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4400)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are a core element of the knowledge management architecture described in Chapter 1. In this chapter we describe a methodology for application-driven ontology development, covering the whole project lifecycle from the kick-off phase to the maintenance phase. Existing methodologies and practical ontology development experiences have in common that they start from the identification of the purpose of the ontology and the need for domain knowledge acquisition. They differ in their foci and in the steps to be taken. In our approach to the ontology development process, we integrate aspects from existing methodologies and lessons learned from practical experience (as described in Section 3.7). We put ontology development into a wider organizational context by performing an a priori feasibility study. The feasibility study is based on CommonKADS. We modified certain aspects of CommonKADS for a tight integration of the feasibility study into our methodology.
    Type
    a
  17. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 4642) [ClassicSimilarity], result of:
              0.010739701 = score(doc=4642,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 4642, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4642)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
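    The core of such a conversion is the well-known mapping of the standard thesaurus relations BT/NT/RT onto skos:broader/narrower/related. Below is a minimal rdflib sketch of that single step; the namespace and sample records are placeholders, and the published method involves considerably more than this mapping (analysis, validation, thesaurus-specific decisions).

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # placeholder

      RECORDS = {  # invented thesaurus records: term -> standard relations
          "ontologies": {"BT": ["knowledge organization systems"], "RT": ["thesauri"]},
          "thesauri": {"BT": ["knowledge organization systems"]},
      }

      g = Graph()
      g.bind("skos", SKOS)

      def concept(label):
          uri = EX[label.replace(" ", "_")]
          g.add((uri, RDF.type, SKOS.Concept))
          g.add((uri, SKOS.prefLabel, Literal(label, lang="en")))
          return uri

      for term, rels in RECORDS.items():
          c = concept(term)
          for broader in rels.get("BT", []):
              g.add((c, SKOS.broader, concept(broader)))
          for related in rels.get("RT", []):
              g.add((c, SKOS.related, concept(related)))

      print(g.serialize(format="turtle"))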
  18. Tzitzikas, Y.; Spyratos, N.; Constantopoulos, P.; Analyti, A.: Extended faceted ontologies (2002) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 2280) [ClassicSimilarity], result of:
              0.010739701 = score(doc=2280,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 2280, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2280)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A faceted ontology consists of a set of facets, where each facet consists of a predefined set of terms structured by a subsumption relation. We propose two extensions of faceted ontologies which allow inferring the conjunctions of terms that are valid in the underlying domain. We give a model-theoretic interpretation to these extended faceted ontologies and provide mechanisms for inferring the valid conjunctions of terms. This inference service can be exploited for preventing errors during the indexing process and for deriving navigation trees that are suitable for browsing. The proposed scheme has several advantages over the hierarchical classification schemes currently in use, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and performed more efficiently).
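    A simplified sketch of the inference service (the facet terms and the declared invalid combination are invented; the paper's model-theoretic treatment is richer): a candidate conjunction is rejected if it specializes any combination the designer declared invalid, where "specializes" follows each facet's subsumption relation.

      BROADER = {  # term -> broader term within its facet (toy data)
          "crete": "greece", "athens": "greece",
          "winter_sports": "sports", "sea_sports": "sports",
      }
      INVALID = [{"crete", "winter_sports"}]  # declared invalid combination

      def broader_or_self(term):
          seen = {term}
          while term in BROADER:
              term = BROADER[term]
              seen.add(term)
          return seen

      def valid(conjunction):
          for bad in INVALID:
              if all(any(b in broader_or_self(t) for t in conjunction) for b in bad):
                  return False
          return True

      print(valid({"crete", "sea_sports"}))     # True
      print(valid({"crete", "winter_sports"}))  # False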
    Type
    a
  19. Riva, P.; Doerr, M.; Zumer, M.: FRBRoo: enabling a common view of information from memory institutions (2008) 0.00
    0.0026742492 = product of:
      0.0053484985 = sum of:
        0.0053484985 = product of:
          0.010696997 = sum of:
            0.010696997 = weight(_text_:a in 3743) [ClassicSimilarity], result of:
              0.010696997 = score(doc=3743,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20142901 = fieldWeight in 3743, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3743)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2008 the FRBR/CRM Harmonisation Working Group achieved a major milestone: a complete version of the object-oriented definition of FRBR (FRBRoo) was released for comment. After a brief overview of the history and context of the Working Group, this paper focuses on the primary contributions resulting from this work:
    - FRBRoo is a self-contained document which expresses the concepts of FRBR using the object-oriented methodology and framework of CIDOC CRM. It is an alternative view on library conceptualisation for a different purpose, not a replacement for FRBR.
    - This 'translation' process presented an opportunity to verify and confirm FRBR's internal consistency.
    - FRBRoo offers a common view of library and museum documentation as two kinds of information from memory institutions. Such a common view is necessary to provide interoperable information systems for all users interested in accessing common or related content.
    - The analysis provided an opportunity for mutual enrichment of FRBR and CIDOC CRM. Examples include:
      - addition of the modelling of time and events to FRBR, which can be seen in its application to the publishing process
      - clarification of the manifestation entity
      - explicit modelling of performances and recordings in FRBR
      - adding the work entity to CRM
      - adding the identifier assignment process to CRM
    - Producing a formalisation which is more suited for implementation with object-oriented tools, and which facilitates the testing and adoption of FRBR concepts in implementations with different functional specifications and in different environments.
  20. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.00
    0.0026742492 = product of:
      0.0053484985 = sum of:
        0.0053484985 = product of:
          0.010696997 = sum of:
            0.010696997 = weight(_text_:a in 2097) [ClassicSimilarity], result of:
              0.010696997 = score(doc=2097,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20142901 = fieldWeight in 2097, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2097)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Interpretation of semantic propositions in free-text documents such as MEDLINE citations would provide valuable support for biomedical applications, and several approaches to semantic interpretation are being pursued in the biomedical informatics community. In this paper, we describe a methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept. In order to effectively process these constructions, we exploit underspecified syntactic analysis and structured domain knowledge from the Unified Medical Language System (UMLS). After introducing the syntactic processing on which our system depends, we focus on the UMLS knowledge that supports interpretation of hypernymic propositions. We first use semantic groups from the Semantic Network to ensure that the two concepts involved are compatible; hierarchical information in the Metathesaurus then determines which concept is more general and which more specific. A preliminary evaluation of a sample based on the semantic group Chemicals and Drugs provides 83% precision. An error analysis was conducted and potential solutions to the problems encountered are presented. The research discussed here serves as a paradigm for investigating the interaction between domain knowledge and linguistic structure in natural language processing, and could also make a contribution to research on automatic processing of discourse structure. Additional implications of the system we present include its integration in advanced semantic interpretation processors for biomedical text and its use for information extraction in specific domains. The approach has the potential to support a range of applications, including information retrieval and ontology engineering.
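    A hedged illustration of the two checks described above; every concept, group, and depth value here is invented placeholder content, not actual UMLS data. First the semantic groups of the two concepts must agree; then relative hierarchy depth decides which concept is the more general one in the hypernymic proposition.

      SEMANTIC_GROUP = {  # toy stand-in for UMLS semantic groups
          "ibuprofen": "Chemicals & Drugs",
          "anti-inflammatory agent": "Chemicals & Drugs",
          "arthritis": "Disorders",
      }
      DEPTH = {"anti-inflammatory agent": 3, "ibuprofen": 5}  # deeper = more specific

      def hypernymic(concept_a, concept_b):
          if SEMANTIC_GROUP[concept_a] != SEMANTIC_GROUP[concept_b]:
              return None  # incompatible groups: no hypernymic reading
          general, specific = sorted((concept_a, concept_b), key=DEPTH.get)
          return (specific, "ISA", general)

      print(hypernymic("ibuprofen", "anti-inflammatory agent"))  # ('ibuprofen', 'ISA', 'anti-inflammatory agent')
      print(hypernymic("ibuprofen", "arthritis"))                # None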
    Type
    a

Languages

  • e 172
  • d 51
  • el 1

Types

  • a 153
  • el 77
  • n 12
  • x 9
  • m 7
  • s 5
  • r 2
  • p 1
