Search (69 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.04
    0.043012477 = sum of:
      0.01794036 = product of:
        0.07176144 = sum of:
          0.07176144 = weight(_text_:authors in 1285) [ClassicSimilarity], result of:
            0.07176144 = score(doc=1285,freq=2.0), product of:
              0.2374559 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05208721 = queryNorm
              0.30220953 = fieldWeight in 1285, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=1285)
        0.25 = coord(1/4)
      0.025072116 = product of:
        0.050144233 = sum of:
          0.050144233 = weight(_text_:w in 1285) [ClassicSimilarity], result of:
            0.050144233 = score(doc=1285,freq=2.0), product of:
              0.19849424 = queryWeight, product of:
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.05208721 = queryNorm
              0.2526231 = fieldWeight in 1285, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.046875 = fieldNorm(doc=1285)
        0.5 = coord(1/2)
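The indented tree above, like the similar trees under the other results, is Lucene ClassicSimilarity "explain" output. Its leaf values can be reproduced from the tf-idf formulas it names; a minimal sketch, using the numbers from the first leaf (_text_:authors in doc 1285):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, doc_freq, max_docs, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

# Values taken from the explain tree for _text_:authors in doc 1285
query_norm = 0.05208721
w_authors = field_weight(2.0, 1258, 44218, 0.046875)   # fieldWeight, ca. 0.3022095
query_weight = idf(1258, 44218) * query_norm           # queryWeight, ca. 0.2374559

# A leaf score is queryWeight * fieldWeight; coord(1/4) then scales the
# subquery sum by the fraction of query clauses that matched.
contribution = 0.25 * query_weight * w_authors         # ca. 0.01794036
```

This reproduces the 0.01794036 summand shown above; the other leaves follow the same pattern with their own docFreq, fieldNorm, and coord factors.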
    
  2. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.04
    0.03667523 = product of:
      0.07335046 = sum of:
        0.07335046 = sum of:
          0.03342949 = weight(_text_:w in 2654) [ClassicSimilarity], result of:
            0.03342949 = score(doc=2654,freq=2.0), product of:
              0.19849424 = queryWeight, product of:
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.05208721 = queryNorm
              0.16841541 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.03125 = fieldNorm(doc=2654)
          0.039920975 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.039920975 = score(doc=2654,freq=4.0), product of:
              0.18240054 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05208721 = queryNorm
              0.21886435 = fieldWeight in 2654, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2654)
      0.5 = coord(1/2)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges in encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in the granularity of the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares the sample SKOS entries, and summarizes problems identified during the SKOS encoding process.
Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
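The encoding approach described in the abstract (classes carrying notations, preferred labels, hierarchy, and mappings to thesaurus terms) can be illustrated with a toy record. A minimal sketch: all identifiers and labels below are invented placeholders rather than actual CCT data, and modelling the class-to-term mapping as skos:related is an assumption, not the poster's reported choice.

```python
# Hypothetical CCT-style record: class notation, label, broader class, mapped terms.
record = {
    "notation": "G254",           # invented placeholder notation
    "prefLabel": "Semantic Web",
    "broader": "G25",
    "mapped_terms": ["semantic web", "ontology"],
}

SKOS = "http://www.w3.org/2004/02/skos/core#"

def to_skos_triples(rec, base="http://example.org/cct/"):
    """Render one class entry as (subject, predicate, object) SKOS triples."""
    s = base + rec["notation"]
    triples = [
        (s, SKOS + "notation", rec["notation"]),
        (s, SKOS + "prefLabel", rec["prefLabel"]),
        (s, SKOS + "broader", base + rec["broader"]),   # classification hierarchy
    ]
    # Mapped thesaurus terms, modelled here as skos:related concepts
    triples += [(s, SKOS + "related", base + "term/" + t.replace(" ", "-"))
                for t in rec["mapped_terms"]]
    return triples

triples = to_skos_triples(record)
```

A real encoding would additionally have to resolve the identifier problem the abstract raises: pre-coordinated headings built from classes, subdivisions, and terms have no unique identifiers to hang such triples on.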
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    0.03482994 = sum of:
      0.020715743 = product of:
        0.08286297 = sum of:
          0.08286297 = weight(_text_:authors in 1634) [ClassicSimilarity], result of:
            0.08286297 = score(doc=1634,freq=6.0), product of:
              0.2374559 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05208721 = queryNorm
              0.34896153 = fieldWeight in 1634, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.25 = coord(1/4)
      0.014114196 = product of:
        0.028228393 = sum of:
          0.028228393 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.028228393 = score(doc=1634,freq=2.0), product of:
              0.18240054 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05208721 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects.
The results for about 50 distinct ontology pairs demonstrate the methodology's potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
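As a rough, simplified illustration of relation unification: the paper's actual rules rest on ontological reference rules and lexical/textual entailment, which the toy synonym lookup below only gestures at; all class and predicate names are invented.

```python
# Toy relations: (subject_class, predicate, object_class)
ont_a = {("Settlement", "located_in", "Region"),
         ("Settlement", "part_of", "Region")}
ont_b = {("Village", "situated_in", "Area")}

# Hypothetical synonym tables standing in for lexical-entailment rules
class_syn = {"Village": "Settlement", "Area": "Region"}
pred_syn = {"situated_in": "located_in"}

def normalize(triple):
    """Map a relation onto ontology A's vocabulary via the synonym tables."""
    s, p, o = triple
    return (class_syn.get(s, s), pred_syn.get(p, p), class_syn.get(o, o))

# A relation from B matches one in A if its normalized form coincides
matches = {t for t in ont_b if normalize(t) in ont_a}
# Unified ontology: A extended with B's relations in A's vocabulary
unified = ont_a | {normalize(t) for t in ont_b}
```

The same normalization step could flag contradictions, e.g. when two ontologies assert incompatible predicates between the same normalized class pair.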
    Date
    20. 1.2015 18:30:22
  4. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    0.026074436 = sum of:
      0.011960239 = product of:
        0.047840957 = sum of:
          0.047840957 = weight(_text_:authors in 1626) [ClassicSimilarity], result of:
            0.047840957 = score(doc=1626,freq=2.0), product of:
              0.2374559 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05208721 = queryNorm
              0.20147301 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
        0.25 = coord(1/4)
      0.014114196 = product of:
        0.028228393 = sum of:
          0.028228393 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
            0.028228393 = score(doc=1626,freq=2.0), product of:
              0.18240054 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05208721 = queryNorm
              0.15476047 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - The growing volumes of semantic data available on the web result in the need to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed such that they are automatically generated from semantic data, and they have been evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative user-centered design methodology. Evaluations with end-users have shown that users quickly become accustomed to these components, despite the fact that they are generated automatically from structured data without requiring knowledge of the underlying semantic technologies, and that the different overview components complement each other, as they focus on different information search needs. Originality/value - Obtaining overviews of semantic data sets cannot easily be done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets.
There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
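A minimal sketch of how overview components such as a navigation bar and a site index might be derived automatically from typed semantic data. The sample data and the ranking choice (instance counts) are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

# Toy rdf:type assertions: (instance, class)
typed = [("ex:alice", "Person"), ("ex:bob", "Person"),
         ("ex:acme", "Organization"), ("ex:paper1", "Document"),
         ("ex:paper2", "Document"), ("ex:paper3", "Document")]

# Count how many instances each class has
counts = Counter(cls for _, cls in typed)

# Navigation bar: most-populated classes first (one plausible ordering)
nav_bar = [cls for cls, _ in counts.most_common()]
# Site index: all classes, alphabetically
site_index = sorted(counts)
```

A treemap component would additionally need the class hierarchy, so that instance counts can be aggregated up the subclass tree before rendering.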
    Date
    20. 1.2015 18:30:22
  5. Hitzler, P.; Krötzsch, M.; Rudolph, S.; Sure, Y.: Semantic Web : Grundlagen (2008) 0.03
    0.025589122 = product of:
      0.051178243 = sum of:
        0.051178243 = product of:
          0.102356486 = sum of:
            0.102356486 = weight(_text_:w in 358) [ClassicSimilarity], result of:
              0.102356486 = score(doc=358,freq=12.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.51566476 = fieldWeight in 358, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=358)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    TVB (W)
    TVP (W)
    TYD (W)
    GHBS
    TVB (W)
    TVP (W)
    TYD (W)
  6. Ulrich, W.: Simple Knowledge Organisation System (2007) 0.03
    0.025072116 = product of:
      0.050144233 = sum of:
        0.050144233 = product of:
          0.100288466 = sum of:
            0.100288466 = weight(_text_:w in 105) [ClassicSimilarity], result of:
              0.100288466 = score(doc=105,freq=2.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.5052462 = fieldWeight in 105, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.09375 = fieldNorm(doc=105)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    0.024699843 = product of:
      0.049399685 = sum of:
        0.049399685 = product of:
          0.09879937 = sum of:
            0.09879937 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.09879937 = score(doc=4643,freq=2.0), product of:
                0.18240054 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05208721 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2007 15:41:14
  8. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.022754217 = sum of:
      0.00747515 = product of:
        0.0299006 = sum of:
          0.0299006 = weight(_text_:authors in 150) [ClassicSimilarity], result of:
            0.0299006 = score(doc=150,freq=2.0), product of:
              0.2374559 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05208721 = queryNorm
              0.12592064 = fieldWeight in 150, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
        0.25 = coord(1/4)
      0.015279067 = product of:
        0.030558133 = sum of:
          0.030558133 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.030558133 = score(doc=150,freq=6.0), product of:
              0.18240054 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05208721 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
        0.5 = coord(1/2)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatic extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. 
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
  9. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    0.021171294 = product of:
      0.04234259 = sum of:
        0.04234259 = product of:
          0.08468518 = sum of:
            0.08468518 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.08468518 = score(doc=6048,freq=2.0), product of:
                0.18240054 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05208721 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2007 15:41:14
  10. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    0.021171294 = product of:
      0.04234259 = sum of:
        0.04234259 = product of:
          0.08468518 = sum of:
            0.08468518 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.08468518 = score(doc=100,freq=2.0), product of:
                0.18240054 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05208721 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2007 15:41:14
  11. Gradmann, S.: Semantic Web und Linked Open Data (2013) 0.02
    0.02089343 = product of:
      0.04178686 = sum of:
        0.04178686 = product of:
          0.08357372 = sum of:
            0.08357372 = weight(_text_:w in 716) [ClassicSimilarity], result of:
              0.08357372 = score(doc=716,freq=2.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.42103854 = fieldWeight in 716, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.078125 = fieldNorm(doc=716)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
  12. Danowski, P.; Goldfarb, D.; Schaffner, V.; Seidler, W.: Linked (Open) Data - Bibliographische Daten im Semantic Web : Bericht der AG Linked Data an die Verbundvollversammlung (16. Mai 2013) (2013) 0.02
    0.02089343 = product of:
      0.04178686 = sum of:
        0.04178686 = product of:
          0.08357372 = sum of:
            0.08357372 = weight(_text_:w in 814) [ClassicSimilarity], result of:
              0.08357372 = score(doc=814,freq=2.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.42103854 = fieldWeight in 814, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.078125 = fieldNorm(doc=814)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.020682074 = product of:
      0.04136415 = sum of:
        0.04136415 = product of:
          0.1654566 = sum of:
            0.1654566 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1654566 = score(doc=701,freq=2.0), product of:
                0.44159594 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05208721 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  14. Heflin, J.; Hendler, J.: A portrait of the Semantic Web in action (2001) 0.02
    0.018126275 = product of:
      0.03625255 = sum of:
        0.03625255 = product of:
          0.1450102 = sum of:
            0.1450102 = weight(_text_:authors in 2547) [ClassicSimilarity], result of:
              0.1450102 = score(doc=2547,freq=6.0), product of:
                0.2374559 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.05208721 = queryNorm
                0.61068267 = fieldWeight in 2547, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2547)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
  15. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.02
    0.017642746 = product of:
      0.03528549 = sum of:
        0.03528549 = product of:
          0.07057098 = sum of:
            0.07057098 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.07057098 = score(doc=2090,freq=2.0), product of:
                0.18240054 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05208721 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  16. May, W.: Reasoning im und für das Semantic Web (2006) 0.02
    0.016714744 = product of:
      0.03342949 = sum of:
        0.03342949 = product of:
          0.06685898 = sum of:
            0.06685898 = weight(_text_:w in 5812) [ClassicSimilarity], result of:
              0.06685898 = score(doc=5812,freq=2.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.33683082 = fieldWeight in 5812, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5812)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Handbook on ontologies (2004) 0.01
    0.014773888 = product of:
      0.029547775 = sum of:
        0.029547775 = product of:
          0.05909555 = sum of:
            0.05909555 = weight(_text_:w in 1952) [ClassicSimilarity], result of:
              0.05909555 = score(doc=1952,freq=4.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.2977192 = fieldWeight in 1952, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1952)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    TVB (W)
    GHBS
    TVB (W)
  18. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.01
    0.014773888 = product of:
      0.029547775 = sum of:
        0.029547775 = product of:
          0.05909555 = sum of:
            0.05909555 = weight(_text_:w in 430) [ClassicSimilarity], result of:
              0.05909555 = score(doc=430,freq=4.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.2977192 = fieldWeight in 430, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=430)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. O'Hara, K.; Hall, W.: Semantic Web (2009) 0.01
    0.014625401 = product of:
      0.029250802 = sum of:
        0.029250802 = product of:
          0.058501605 = sum of:
            0.058501605 = weight(_text_:w in 3871) [ClassicSimilarity], result of:
              0.058501605 = score(doc=3871,freq=2.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.29472697 = fieldWeight in 3871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Wenige, L.: The application of linked data resources for library recommender systems (2017) 0.01
    0.014625401 = product of:
      0.029250802 = sum of:
        0.029250802 = product of:
          0.058501605 = sum of:
            0.058501605 = weight(_text_:w in 3500) [ClassicSimilarity], result of:
              0.058501605 = score(doc=3500,freq=2.0), product of:
                0.19849424 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.05208721 = queryNorm
                0.29472697 = fieldWeight in 3500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3500)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber

Languages

  • e 54
  • d 15

Types

  • a 44
  • el 18
  • m 11
  • s 5
  • n 1
  • x 1