Search (77 results, page 1 of 4)

  • × theme_ss:"Semantic Web"
  • × theme_ss:"Wissensrepräsentation"
  1. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.05
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are:
    - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
    - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
    - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies;
    - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
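    As an illustration of the Linked Data principles the abstract above refers to (RDF statements about resources identified by URIs), a minimal sketch using the Python rdflib library follows; the namespaces, URIs and values are invented for illustration and are not taken from the report.
      # Minimal sketch: expressing a bibliographic record as RDF triples (Linked Data).
      # Assumes the rdflib package; all URIs and literal values below are hypothetical
      # examples, not real library data.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      BIBO = Namespace("http://purl.org/ontology/bibo/")   # example bibliographic vocabulary
      EX = Namespace("http://example.org/bib/")             # hypothetical library namespace

      g = Graph()
      g.bind("dcterms", DCTERMS)
      g.bind("bibo", BIBO)

      book = EX["record/0001"]                              # URI ("Web address") for the resource
      g.add((book, RDF.type, BIBO.Book))
      g.add((book, DCTERMS.title, Literal("Handbook on ontologies")))
      g.add((book, DCTERMS.creator, URIRef("http://example.org/authority/person/42")))
      g.add((book, DCTERMS.subject, URIRef("http://example.org/authority/subject/semantic-web")))

      print(g.serialize(format="turtle"))                   # rdflib >= 6 returns a str
    Because every statement uses globally resolvable URIs, other datasets can link to the same author and subject identifiers, which is what makes the data re-usable outside its original library context.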
  2. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.05
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
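    The abstract above evaluates a learned reasoner by the precision and recall of its inferred triples against the deductive closure used as the gold standard. A minimal sketch of that comparison over plain triple sets is shown below; the two example sets are invented for illustration.
      # Compare triples produced by an approximate (e.g. learned) reasoner against the
      # deductive closure treated as the gold standard. Triples are plain (s, p, o) tuples.
      def precision_recall(predicted, gold):
          predicted, gold = set(predicted), set(gold)
          true_positives = predicted & gold
          precision = len(true_positives) / len(predicted) if predicted else 0.0
          recall = len(true_positives) / len(gold) if gold else 0.0
          return precision, recall

      gold = {("ex:a", "rdf:type", "ex:B"), ("ex:a", "rdf:type", "ex:C")}
      predicted = {("ex:a", "rdf:type", "ex:B"), ("ex:a", "rdf:type", "ex:D")}
      print(precision_recall(predicted, gold))   # (0.5, 0.5)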
  3. Handbook on ontologies (2004) 0.04
    Abstract
    An ontology is a description (like a formal specification of a program) of concepts and relationships that can exist for an agent or a community of agents. The concept is important for the purpose of enabling knowledge sharing and reuse. The Handbook on Ontologies provides a comprehensive overview of the current status and future prospects of the field of ontologies. The handbook presents standards that have been created recently, surveys methods that have been developed, and shows how to bring both into the practice of ontology infrastructures and applications that are the best of their kind.
    LCSH
    Knowledge representation (Information theory)
    Conceptual structures (Information theory)
    Series
    International handbook on information systems
    Subject
    Knowledge representation (Information theory)
    Conceptual structures (Information theory)
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to retrieval results of very low usefulness for a user's task at hand. In the last ten years ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the necessity of including the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  5. SKOS Simple Knowledge Organization System Reference : W3C Recommendation 18 August 2009 (2009) 0.03
    Abstract
    This document defines the Simple Knowledge Organization System (SKOS), a common data model for sharing and linking knowledge organization systems via the Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications. The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL). This document is the normative specification of the Simple Knowledge Organization System. It is intended for readers who are involved in the design and implementation of information systems, and who already have a good understanding of Semantic Web technology, especially RDF and OWL. For an informative guide to using SKOS, see the [SKOS-PRIMER].
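    To make the SKOS data model described above concrete, here is a minimal sketch of a tiny concept scheme with preferred labels and a broader/narrower hierarchy, written with the Python rdflib library; the scheme, URIs and labels are invented for illustration.
      # Minimal SKOS sketch: a concept scheme, two concepts, labels and hierarchy.
      # Assumes rdflib; all identifiers below are hypothetical examples.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/scheme/")

      g = Graph()
      g.bind("skos", SKOS)

      scheme = EX.animals
      g.add((scheme, RDF.type, SKOS.ConceptScheme))

      for name in ("mammals", "cats"):
          c = EX[name]
          g.add((c, RDF.type, SKOS.Concept))
          g.add((c, SKOS.inScheme, scheme))
          g.add((c, SKOS.prefLabel, Literal(name, lang="en")))

      # Hierarchical relation: "cats" is narrower than "mammals".
      g.add((EX.mammals, SKOS.narrower, EX.cats))
      g.add((EX.cats, SKOS.broader, EX.mammals))

      print(g.serialize(format="turtle"))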
  6. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
    Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Matching individual entities alone, however, cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
    Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
    Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research.
    Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
    Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.494-518
  7. Guns, R.: Tracing the origins of the semantic web (2013) 0.03
    Abstract
    The Semantic Web has been criticized for not being semantic. This article examines the questions of why and how the Web of Data, expressed in the Resource Description Framework (RDF), has come to be known as the Semantic Web. Contrary to previous papers, we deliberately take a descriptive stance and do not start from preconceived ideas about the nature of semantics. Instead, we mainly base our analysis on early design documents of the (Semantic) Web. The main determining factor is shown to be link typing, coupled with the influence of online metadata. Both factors already were present in early web standards and drafts. Our findings indicate that the Semantic Web is directly linked to older artificial intelligence work, despite occasional claims to the contrary. Because of link typing, the Semantic Web can be considered an example of a semantic network. Originally network representations of the meaning of natural language utterances, semantic networks have eventually come to refer to any networks with typed (usually directed) links. We discuss possible causes for this shift and suggest that it may be due to confounding paradigmatic and syntagmatic semantic relations.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.2173-2181
  8. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.03
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges in encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their representation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
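    The abstract above describes encoding each CCT class together with its mapped preferred and entry thesaurus terms. A purely hypothetical sketch of what one such SKOS entry could look like is given below, using skos:notation for the class number, skos:prefLabel for the preferred term and skos:altLabel for an entry term; it assumes rdflib, and the notation and labels are invented placeholders rather than actual CCT content.
      # Hypothetical SKOS entry for an integrated classification/thesaurus record.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      CCT = Namespace("http://example.org/cct/")   # hypothetical namespace for CCT concepts

      g = Graph()
      g.bind("skos", SKOS)

      concept = CCT["TP391"]                        # invented class identifier
      g.add((concept, RDF.type, SKOS.Concept))
      g.add((concept, SKOS.notation, Literal("TP391")))                               # classification number
      g.add((concept, SKOS.prefLabel, Literal("Information retrieval", lang="en")))   # preferred term
      g.add((concept, SKOS.altLabel, Literal("Document retrieval", lang="en")))       # entry (non-preferred) term

      print(g.serialize(format="turtle"))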
  9. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.03
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to solve the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision, since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model, oriented to the exploitation of domain Knowledge Bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search. Additional contributions include an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. The experiments conducted show that our semantic search model obtained comparable and even better performance results (in terms of MAP and P@10 values) than the best TREC automatic system.
  10. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.02
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.
    SKOS can be used for subject headings but also many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud."
    SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
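    The two uses described above, narrowing a query along narrower terms and linking one vocabulary to another, can be sketched with rdflib as follows; the URIs are invented placeholders rather than the real LCSH or RAMEAU identifiers.
      # Sketch: a narrower-term chain for query narrowing plus a cross-vocabulary mapping.
      # Assumes rdflib; all URIs below are hypothetical.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import SKOS

      LCSH = Namespace("http://example.org/lcsh/")
      RAMEAU = Namespace("http://example.org/rameau/")

      g = Graph()
      g.add((LCSH.chinese_literature, SKOS.prefLabel, Literal("Chinese literature", lang="en")))
      g.add((LCSH.chinese_literature, SKOS.narrower, LCSH.chinese_drama))
      g.add((LCSH.chinese_drama, SKOS.narrower, LCSH.chinese_childrens_plays))

      # Cross-vocabulary mapping of the kind produced by the STITCH/TELplus/MACS efforts.
      g.add((LCSH.chinese_drama, SKOS.closeMatch, RAMEAU.theatre_chinois))

      # Walk skos:narrower transitively: yields the starting concept and everything below it,
      # giving the candidate concepts for a progressively narrowed query.
      for concept in g.transitive_objects(LCSH.chinese_literature, SKOS.narrower):
          print(concept)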
  11. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.02
    Abstract
    One vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine understandable markup. These annotations will provide metadata about the documents as well as machine interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and causes improvements in either retrieval or inference to lead directly to improvements in the other.
    Date
    12. 2.2011 17:35:22
  12. Soergel, D.: SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology (1996) 0.02
    Abstract
    Presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Series
    Advances in knowledge organization; vol.5
    Source
    Knowledge organization and change: Proceedings of the Fourth International ISKO Conference, 15-18 July 1996, Library of Congress, Washington, DC. Ed.: R. Green
  13. Davies, J.; Duke, A.; Stonkus, A.: OntoShare: evolving ontologies in a knowledge sharing system (2004) 0.02
    Abstract
    We saw in the introduction how the Semantic Web makes possible a new generation of knowledge management tools. We now turn our attention more specifically to Semantic Web based support for virtual communities of practice. The notion of communities of practice has attracted much attention in the field of knowledge management. Communities of practice are groups within (or sometimes across) organizations who share a common set of information needs or problems. They are typically not a formal organizational unit but an informal network, each sharing in part a common agenda and shared interests or issues. In one example it was found that a lot of knowledge sharing among copier engineers took place through informal exchanges, often around a water cooler. As well as local, geographically based communities, trends towards flexible working and globalisation have led to interest in supporting dispersed communities using Internet technology. The challenge for organizations is to support such communities and make them effective. Provided with an ontology meeting the needs of a particular community of practice, knowledge management tools can arrange knowledge assets into the predefined conceptual classes of the ontology, allowing more natural and intuitive access to knowledge. Knowledge management tools must give users the ability to organize information into a controllable asset. Building an intranet-based store of information is not sufficient for knowledge management; the relationships within the stored information are vital. These relationships cover such diverse issues as relative importance, context, sequence, significance, causality and association. The potential for knowledge management tools is vast; not only can they make better use of the raw information already available, but they can sift, abstract and help to share new information, and present it to users in new and compelling ways.
    In this chapter, we describe the OntoShare system which facilitates and encourages the sharing of information between communities of practice within (or perhaps across) organizations and which encourages people - who may not previously have known of each other's existence in a large organization - to make contact where there are mutual concerns or interests. As users contribute information to the community, a knowledge resource annotated with meta-data is created. Ontologies defined using the resource description framework (RDF) and RDF Schema (RDFS) are used in this process. RDF is a W3C recommendation for the formulation of meta-data for WWW resources. RDF(S) extends this standard with the means to specify domain vocabulary and object structures - that is, concepts and the relationships that hold between them. In the next section, we describe in detail the way in which OntoShare can be used to share and retrieve knowledge and how that knowledge is represented in an RDF-based ontology. We then proceed to discuss in Section 10.3 how the ontologies in OntoShare evolve over time based on user interaction with the system and motivate our approach to user-based creation of RDF-annotated information resources. The way in which OntoShare can help to locate expertise within an organization is then described, followed by a discussion of the sociotechnical issues of deploying such a tool. Finally, a planned evaluation exercise and avenues for further research are outlined.
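    The passage above explains that RDF annotates shared resources with metadata while RDF Schema supplies the community's vocabulary, i.e. concepts and the relationships that hold between them. A minimal sketch of that division of labour, written with rdflib, is shown below; every class, property and resource name is an invented example, not part of the OntoShare system itself.
      # Sketch: RDFS defines a tiny community vocabulary; RDF then annotates one shared
      # information resource with it. Assumes rdflib; all names are hypothetical.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, RDFS

      ONT = Namespace("http://example.org/ontoshare/")

      g = Graph()

      # Vocabulary (RDF Schema): a concept class and a property relating resources to topics.
      g.add((ONT.KnowledgeAsset, RDF.type, RDFS.Class))
      g.add((ONT.Topic, RDF.type, RDFS.Class))
      g.add((ONT.aboutTopic, RDF.type, RDF.Property))
      g.add((ONT.aboutTopic, RDFS.domain, ONT.KnowledgeAsset))
      g.add((ONT.aboutTopic, RDFS.range, ONT.Topic))

      # Instance data (RDF): one shared resource annotated with the vocabulary.
      doc = URIRef("http://example.org/docs/copier-fault-notes")
      g.add((doc, RDF.type, ONT.KnowledgeAsset))
      g.add((doc, RDFS.label, Literal("Notes on recurring copier faults")))
      g.add((doc, ONT.aboutTopic, ONT.CopierMaintenance))

      print(g.serialize(format="turtle"))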
  14. Chaudhury, S.; Mallik, A.; Ghosh, H.: Multimedia ontology : representation and applications (2016) 0.02
    Abstract
    The book covers multimedia ontology in heritage preservation with intellectual explorations of various themes of Indian cultural heritage. The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relays real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building semantic, content-based search and retrieval engines and also with developing vertical application-specific search applications. It guides you in designing multimedia tools that aid in logical and conceptual organization of large amounts of multimedia data. As a practical demonstration, it showcases multimedia applications in cultural heritage preservation efforts and the creation of virtual museums. The book describes the limitations of existing ontology techniques in semantic multimedia data processing, as well as some open problems in the representations and applications of multimedia ontology. As an antidote, it introduces new ontology representation and reasoning schemes that overcome these limitations. The long, compiled efforts reflected in Multimedia Ontology: Representation and Applications are a signpost for new achievements and developments in efficiency and accessibility in the field.
    Footnote
    Rez. in: Annals of Library and Information Studies 62(2015) no.4, S.299-300 (A.K. Das)
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
  15. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.02
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  16. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.01
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Content
    Expanded version of a paper published in Advances in Knowledge Organization v.5 (1996): 165-173 (4th International ISKO Conference, Washington, D.C., 1996 July 15-18): SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology.
  17. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.01
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises - while often inspired by - limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: What are knowledge graphs? Which applications do we see emerging? Which open research questions still need to be addressed and which technology gaps still need to be closed?
  18. Pattuelli, M.C.; Rubinow, S.: Charting DBpedia : towards a cartography of a major linked dataset (2012) 0.01
    Abstract
    This paper provides an analysis of the knowledge structure underlying DBpedia, one of the largest and most heavily used datasets in the current Linked Data landscape. The study reveals an evolving knowledge representation environment where different descriptive and classification approaches are employed concurrently. This analysis opens up a new area of research to which the knowledge organization community can make a significant contribution.
    Series
    Advances in knowledge organization; vol.13
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
  19. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.01
    Abstract
    This work-in-progress communication contains the papers and materials presented by Winfried Schmitz-Esser and Alexander Sigel in the joint workshop (with Roberto Poli) "Introducing Terminology-based Ontologies" at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006.
    Content
    Contents:
    1. From traditional Knowledge Organization Systems (authority files, classifications, thesauri) towards ontologies on the web (Alexander Sigel) (tutorial; paper with slides interspersed), pp. 3-53
    2. Introduction to Integrative Cross-Language Ontology (ICLO): Formalizing and interrelating textual knowledge to enable intelligent action and knowledge sharing (Winfried Schmitz-Esser), pp. 54-113
    3. First Idea Sketch on Modelling ICLO with Topic Maps (Alexander Sigel) (work-in-progress paper; topic maps available from the author), pp. 114-130
  20. Semantic applications (2018) 0.01
    Content
    Introduction.- Ontology Development.- Compliance using Metadata.- Variety Management for Big Data.- Text Mining in Economics.- Generation of Natural Language Texts.- Sentiment Analysis.- Building Concise Text Corpora from Web Contents.- Ontology-Based Modelling of Web Content.- Personalized Clinical Decision Support for Cancer Care.- Applications of Temporal Conceptual Semantic Systems.- Context-Aware Documentation in the Smart Factory.- Knowledge-Based Production Planning for Industry 4.0.- Information Exchange in Jurisdiction.- Supporting Automated License Clearing.- Managing cultural assets: Implementing typical cultural heritage archive's usage scenarios via Semantic Web technologies.- Semantic Applications for Process Management.- Domain-Specific Semantic Search Applications.
    LCSH
    Information storage and retrieval
    Management information systems
    Information Systems Applications (incl. Internet)
    Management of Computing and Information Systems
    Information Storage and Retrieval
    RSWK
    Information Retrieval
    Subject
    Information Retrieval
    Information storage and retrieval
    Management information systems
    Information Systems Applications (incl. Internet)
    Management of Computing and Information Systems
    Information Storage and Retrieval

Languages

  • e 69
  • d 8

Types

  • a 40
  • el 33
  • m 11
  • s 7
  • n 5
  • x 4
  • r 2
