Search (93 results, page 2 of 5)

  • Filter: language_ss:"e"
  • Filter: theme_ss:"Wissensrepräsentation"
  1. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.01
    0.012794068 = product of:
      0.051176272 = sum of:
        0.051176272 = weight(_text_:reference in 4645) [ClassicSimilarity], result of:
          0.051176272 = score(doc=4645,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.2696973 = fieldWeight in 4645, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=4645)
      0.25 = coord(1/4)
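
    Aside: the indented breakdown above each abstract is, by all appearances, Lucene's "explain" output for its classic TF-IDF similarity (ClassicSimilarity). The short sketch below reproduces the arithmetic of result 1 under that assumption; the helper names are illustrative and are not part of the catalog software.

        import math

        def idf(doc_freq, max_docs):
            # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def tf(freq):
            # Raw term frequency is dampened by a square root.
            return math.sqrt(freq)

        # Figures taken from result 1 (term "reference" in doc 4645):
        query_norm = 0.04664141                        # query normalization constant
        field_norm = 0.046875                          # stored length norm for this field
        idf_val = idf(2055, 44218)                     # ~4.0683694
        query_weight = idf_val * query_norm            # ~0.18975449
        field_weight = tf(2.0) * idf_val * field_norm  # ~0.2696973
        coord = 1 / 4                                  # 1 of 4 query clauses matched

        score = query_weight * field_weight * coord    # ~0.012794068, as listed
        print(round(score, 9))

    The same recipe accounts for every score tree in this list; only tf, idf, fieldNorm, and the coord factors vary per entry (result 9, for instance, carries an extra coord(1/2) because its matching clause sits inside a nested sub-query).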
    
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
  2. Soergel, D.: SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology (1996) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 3575) [ClassicSimilarity], result of:
          0.0426469 = score(doc=3575,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 3575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3575)
      0.25 = coord(1/4)
    
    Abstract
    Presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
  3. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 3576) [ClassicSimilarity], result of:
          0.0426469 = score(doc=3576,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 3576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3576)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
  4. Pieterse, V.; Kourie, D.G.: Lists, taxonomies, lattices, thesauri and ontologies : paving a pathway through a terminological jungle (2014) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 1386) [ClassicSimilarity], result of:
          0.0426469 = score(doc=1386,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 1386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1386)
      0.25 = coord(1/4)
    
    Abstract
    This article seeks to resolve ambiguities and create a shared vocabulary with reference to classification-related terms. Due to the need to organize information in all disciplines, knowledge organization systems (KOSs) with varying attributes, content and structures have been developed independently in different domains. These scattered developments have given rise to a conglomeration of classification-related terms which are often used inconsistently both within and across different research fields. This terminological conundrum has impeded communication among researchers. To build the ideal Semantic Web, this problem will have to be surmounted. A common nomenclature is needed to incorporate the vast body of semantic information embedded in existing classifications when developing new systems and to facilitate interoperability among diverse systems. To bridge the terminological gap between the researchers and practitioners of disparate disciplines, we have identified five broad classes of KOSs: lists, taxonomies, lattices, thesauri and ontologies. We provide definitions of the terms catalogue, index, lexicon, knowledge base and topic map. After explaining the meaning and usage of these terms, we delineate how they relate to one another as well as to the different types of KOSs. Our definitions are not intended to replace established definitions but rather to clarify their respective meanings and to advocate their proper usage. In particular we caution against the indiscriminate use of the term ontology in contexts where, in our view, the term thesaurus would be more appropriate.
  5. Vlachidis, A.; Tudhope, D.: A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain (2016) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 2895) [ClassicSimilarity], result of:
          0.0426469 = score(doc=2895,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 2895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2895)
      0.25 = coord(1/4)
    
    Abstract
    The article presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection, and Word-Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH. Relation Extraction (RE) performance benefits from a syntactic-based definition of RE patterns derived from domain oriented corpus analysis. The evaluation also shows clear benefit in the use of assistive natural language processing (NLP) modules relating to Word-Sense Disambiguation, Negation Detection, and Noun Phrase Validation, together with controlled thesaurus expansion. The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include recognition of relevant entities using shallow parsing NLP techniques driven by a complementary use of ontological and terminological domain resources and empirical derivation of context-driven RE rules for the recognition of semantic relationships from phrases of unstructured text.
  6. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 3437) [ClassicSimilarity], result of:
          0.0426469 = score(doc=3437,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 3437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
      0.25 = coord(1/4)
    
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination (triangulation) of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
  7. Amirhosseini, M.: Theoretical base of quantitative evaluation of unity in a thesaurus term network based on Kant's epistemology (2010) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 5854) [ClassicSimilarity], result of:
          0.0426469 = score(doc=5854,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 5854, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5854)
      0.25 = coord(1/4)
    
    Abstract
    Quantitative evaluation of thesauri has been developed considerably since 1976. This type of evaluation is based on counting particular features of thesaurus structure, such as preferred terms, non-preferred terms, and cross-reference terms. Various statistical tests have accordingly been proposed and applied for the evaluation of thesauri. In this article, we explain several ratios for the quantitative evaluation of unity in a thesaurus term network. The theoretical basis for constructing the ratios' indicators and indices, and the epistemological thinking behind this type of quantitative evaluation, are also discussed. That theoretical basis is the epistemology of Immanuel Kant's Critique of Pure Reason, in which the cognition states of transcendental understanding are divided into three steps: the first is perception, the second combination, and the third relation making. Term relation domains and conceptual relation domains can be analyzed with ratios. The use of quantitative evaluations in current research in the field of thesaurus construction prepares a basis for a restoration period. In modern thesaurus construction, traditional term relations are analyzed in detail in the form of new conceptual relations. Hence, new domains of hierarchical and associative relations are constructed in the form of relations between concepts. These newly formed conceptual domains can be a suitable basis for quantitative evaluation analysis of conceptual relations.
  8. Onofri, A.: Concepts in context (2013) 0.01
    0.01055457 = product of:
      0.04221828 = sum of:
        0.04221828 = weight(_text_:reference in 1077) [ClassicSimilarity], result of:
          0.04221828 = score(doc=1077,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22248895 = fieldWeight in 1077, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1077)
      0.25 = coord(1/4)
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept C1 the same concept as a concept C2? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus). First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
  9. Burstein, M.; McDermott, D.V.: Ontology translation for interoperability among Semantic Web services (2005) 0.01
    0.009707294 = product of:
      0.038829178 = sum of:
        0.038829178 = product of:
          0.077658355 = sum of:
            0.077658355 = weight(_text_:services in 2661) [ClassicSimilarity], result of:
              0.077658355 = score(doc=2661,freq=10.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.45351148 = fieldWeight in 2661, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on the use of shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will depend on ontology mappings and architectures supporting the associated translation processes. The question we ask is: does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of the semantic web service modeling ontology OWL-S and argues that, as a practical matter, the translation function cannot always be isolated in mediators. Ontology mappings need to be published on the semantic web just as ontologies themselves are. The translation for service discovery, service process model interpretation, task negotiation, service invocation, and response interpretation may then be distributed to various places in the architecture so that translation can be done in the specific goal-oriented informational contexts of the agents performing these processes. We present arguments for assigning translation responsibility to particular agents in the cases of service invocation, response translation, and matchmaking.
  10. Panzer, M.: Towards the "webification" of controlled subject vocabulary : a case study involving the Dewey Decimal Classification (2007) 0.01
    0.008595204 = product of:
      0.034380816 = sum of:
        0.034380816 = product of:
          0.06876163 = sum of:
            0.06876163 = weight(_text_:services in 538) [ClassicSimilarity], result of:
              0.06876163 = score(doc=538,freq=4.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.40155616 = fieldWeight in 538, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=538)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The presentation will briefly introduce a series of major principles for bringing subject terminology to the network level. A closer look at one KOS in particular, the Dewey Decimal Classification, should help to gain more insight into the perceived difficulties and potential benefits of building taxonomy services out and on top of classic large-scale vocabularies or taxonomies.
    Content
    Presentation given at "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  11. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    0.007899084 = product of:
      0.031596337 = sum of:
        0.031596337 = product of:
          0.06319267 = sum of:
            0.06319267 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.06319267 = score(doc=6089,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Pages
    pp. 11-22
  12. Frické, M.: Logic and the organization of information (2012) 0.01
    0.007463207 = product of:
      0.029852828 = sum of:
        0.029852828 = weight(_text_:reference in 1782) [ClassicSimilarity], result of:
          0.029852828 = score(doc=1782,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.15732343 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.25 = coord(1/4)
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects (books, ebooks, journals, articles, web pages, images, emails, podcasts, and more) in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
  13. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.01
    0.0073673185 = product of:
      0.029469274 = sum of:
        0.029469274 = product of:
          0.058938548 = sum of:
            0.058938548 = weight(_text_:services in 6014) [ClassicSimilarity], result of:
              0.058938548 = score(doc=6014,freq=4.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.344191 = fieldWeight in 6014, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6014)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Ontologies are nowadays used in many applications that require data, services, and resources to be interoperable and machine-understandable. Such applications include, for example, web service discovery and composition, information integration across databases, and intelligent search. The general idea is that data and services are semantically described with respect to ontologies, which are formal specifications of a domain of interest, and can thus be shared and reused in such a way that the shared meaning specified by the ontology remains formally the same across different parties and applications. As the cost of creating ontologies is relatively high, different proposals have emerged for learning ontologies from structured and unstructured resources. In this article we examine the maturity of techniques for ontology learning from textual resources, addressing the question of whether the state of the art is mature enough to produce ontologies 'on demand'.
  14. Kruk, S.R.; Cygan, M.; Gzella, A.; Woroniecki, T.; Dabrowski, M.: JeromeDL: the social semantic digital library (2009) 0.01
    0.0073673185 = product of:
      0.029469274 = sum of:
        0.029469274 = product of:
          0.058938548 = sum of:
            0.058938548 = weight(_text_:services in 3383) [ClassicSimilarity], result of:
              0.058938548 = score(doc=3383,freq=4.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.344191 = fieldWeight in 3383, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3383)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The initial research on semantic digital libraries resulted in the design and implementation of JeromeDL; subsequent research on online social networking and information discovery delivered new sets of features that were implemented in JeromeDL. Eventually, this digital library was redesigned to follow the architecture of a social semantic digital library. JeromeDL describes each resource using three types of metadata: structural, bibliographic, and community. It delivers services leveraging each of these information types. Annotations based on the structural and legacy metadata and on the bibliographic ontology are rendered to users in one mixed representation of library resources. Community annotations are managed by separate services, such as social semantic collaborative filtering or the blogging component.
  15. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.01
    0.006945974 = product of:
      0.027783897 = sum of:
        0.027783897 = product of:
          0.055567794 = sum of:
            0.055567794 = weight(_text_:services in 537) [ClassicSimilarity], result of:
              0.055567794 = score(doc=537,freq=2.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.3245064 = fieldWeight in 537, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0625 = fieldNorm(doc=537)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Presentation given at "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  16. OWL Web Ontology Language Use Cases and Requirements (2004) 0.01
    0.006945974 = product of:
      0.027783897 = sum of:
        0.027783897 = product of:
          0.055567794 = sum of:
            0.055567794 = weight(_text_:services in 4686) [ClassicSimilarity], result of:
              0.055567794 = score(doc=4686,freq=2.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.3245064 = fieldWeight in 4686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4686)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This document specifies usage scenarios, goals and requirements for a web ontology language. An ontology formally defines a common set of terms that are used to describe and represent a domain. Ontologies can be used by automated tools to power advanced services such as more accurate web search, intelligent software agents and knowledge management.
  17. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.0067025954 = product of:
      0.026810382 = sum of:
        0.026810382 = product of:
          0.053620763 = sum of:
            0.053620763 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.053620763 = score(doc=3355,freq=4.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  18. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.006319267 = product of:
      0.025277069 = sum of:
        0.025277069 = product of:
          0.050554138 = sum of:
            0.050554138 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.050554138 = score(doc=3376,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.2010 16:58:22
  19. OWL Web Ontology Language Test Cases (2004) 0.01
    0.006319267 = product of:
      0.025277069 = sum of:
        0.025277069 = product of:
          0.050554138 = sum of:
            0.050554138 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.050554138 = score(doc=4685,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 8.2011 13:33:22
  20. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.01
    0.006319267 = product of:
      0.025277069 = sum of:
        0.025277069 = product of:
          0.050554138 = sum of:
            0.050554138 = weight(_text_:22 in 4476) [ClassicSimilarity], result of:
              0.050554138 = score(doc=4476,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.30952093 = fieldWeight in 4476, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4476)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1.10.2018 14:13:22

Types

  • a 66
  • el 24
  • x 5
  • m 4
  • n 4
  • p 1
  • s 1