Search (86 results, page 1 of 5)

  • theme_ss:"Semantic Web"
  1. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.05
    0.048839826 = product of:
      0.09767965 = sum of:
        0.09767965 = sum of:
          0.04810319 = weight(_text_:c in 1026) [ClassicSimilarity], result of:
            0.04810319 = score(doc=1026,freq=2.0), product of:
              0.18031284 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.052273605 = queryNorm
              0.2667763 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
          0.04957646 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
            0.04957646 = score(doc=1026,freq=2.0), product of:
              0.18305326 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052273605 = queryNorm
              0.2708308 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
      0.5 = coord(1/2)
    
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
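The indented numeric block under result 1 is Lucene ClassicSimilarity "explain" output: per matched term, score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(1/2) halving the sum because only part of the query matched. A minimal Python sketch re-deriving result 1's score from the figures above (queryNorm is copied from the dump, since it depends on the whole query):

```python
import math

# Re-derive result 1's score from the ClassicSimilarity explain output above.
MAX_DOCS = 44218          # maxDocs in the dump
QUERY_NORM = 0.052273605  # queryNorm, copied from the dump (query-dependent)
FIELD_NORM = 0.0546875    # fieldNorm(doc=1026), an index-time length norm

def idf(doc_freq: int) -> float:
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int) -> float:
    tf = math.sqrt(freq)                      # tf = sqrt(termFreq)
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = tf * idf(doc_freq) * FIELD_NORM
    return query_weight * field_weight

# Terms "c" (docFreq=3817) and "22" (docFreq=3622), each with freq=2.0;
# coord(1/2) scales the sum by the fraction of query clauses that matched.
score = 0.5 * (term_score(2.0, 3817) + term_score(2.0, 3622))
# score comes out ~0.0488, matching the 0.048839826 shown above
```

Every intermediate value (idf, queryWeight, fieldWeight) reproduces the corresponding line of the explain tree to float precision.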
  2. Bianchini, C.; Willer, M.: ISBD resource and its description in the context of the Semantic Web (2014) 0.05
    
    Abstract
    This article explores the question "What is an International Standard for Bibliographic Description (ISBD) resource in the context of the Semantic Web, and what is the relationship of its description to linked data?" This question is discussed against the background of the dichotomy between description and access, using the Semantic Web's differentiation of three logical layers: real-world objects, web of data, and special purpose (bibliographic) data. The representation of bibliographic data as linked data is discussed, distinguishing the description of a resource from the iconic/objective and the informational/subjective viewpoints. In the conclusion, the authors give views on possible directions of future development of the ISBD.
  3. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.04
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
  4. McCathieNevile, C.; Méndez Rodríguez, E.M.: Library cards for the 21st century (2006) 0.04
    
    Abstract
    This paper presents several reflections on the traditional card catalogues and RDF (Resource Description Framework), which is "the" standard for creating the Semantic Web. This work grew out of discussion between the authors after the Working Group on Metadata Schemes meeting held at the IFLA conference in Buenos Aires (2004). The paper provides an overview of RDF from the perspective of cataloguers, catalogues and library cards. The central theme is the discussion of resource description as a discipline that could be based on RDF. RDF is explained as a very simple grammar, using metadata and ontologies for semantic search and access. RDF has the ability to enhance 21st century libraries and metadata interoperability in digital libraries, while maintaining the expressive power that was available to librarians when catalogues were physical artefacts.
    Source
    Knitting the Semantic Web: Cataloging & Classification Quarterly, vol.43, nos.3/4
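The "very simple grammar" the abstract refers to is RDF's triple model: every statement is a (subject, predicate, object) tuple, and a graph is just a set of such tuples. A plain-Python sketch of that model (the URIs and property names below are illustrative, not taken from the paper):

```python
# RDF's entire grammar: statements are (subject, predicate, object) triples.
# The record URI and Dublin Core properties here are illustrative examples.
DC = "http://purl.org/dc/elements/1.1/"
record = "http://example.org/catalogue/record/1"

graph = {
    (record, DC + "title", "Library cards for the 21st century"),
    (record, DC + "creator", "McCathieNevile, C."),
}

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

titles = match(graph, p=DC + "title")
```

In this sense a catalogue card becomes a bundle of triples about one resource, which is the analogy the paper develops.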
  5. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other-related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. 
The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  6. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.03
    
    Abstract
    Robertson and Spärck Jones pioneered experimental probabilistic models (Binary Independence Model) with both a typology generalizing the Boolean model, a frequency counting to calculate elementary weightings, and their combination into a global probabilistic estimation. However, this model did not consider indexing terms dependencies. An extension to mixture models (e.g., using a 2-Poisson law) made it possible to take into account these dependencies from a macroscopic point of view (BM25), as well as a shallow linguistic processing of co-references. New approaches (language models, for example "bag of words" models, probabilistic dependencies between requests and documents, and consequently Bayesian inference using Dirichlet prior conjugate) furnished new solutions for documents structuring (categorization) and for index smoothing. Presently, in these probabilistic models the main issues have been addressed from a formal point of view only. Thus, linguistic properties are neglected in the indexing language. The authors examine how a linguistic and semantic modeling can be integrated in indexing languages and set up a hybrid model that makes it possible to deal with different information retrieval problems in a unified way.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
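The BM25 extension mentioned in this abstract handles term dependencies macroscopically by saturating term frequency and normalizing for document length. A sketch of one term's contribution (k1 = 1.2 and b = 0.75 are conventional defaults, assumed here; the example counts are made up, not from the paper):

```python
import math

def bm25_term(tf, df, num_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """One term's BM25 contribution: saturating tf times a smoothed idf."""
    idf = math.log(1.0 + (num_docs - df + 0.5) / (df + 0.5))
    # tf saturates toward (k1 + 1); b controls document-length normalization
    norm_tf = tf * (k1 + 1.0) / (tf + k1 * (1.0 - b + b * doc_len / avg_doc_len))
    return idf * norm_tf

# e.g. a term occurring twice in an average-length document
w = bm25_term(tf=2.0, df=3817, num_docs=44218, doc_len=100.0, avg_doc_len=100.0)
```

Unlike the ClassicSimilarity weights shown for result 1, doubling the term frequency here yields less than double the weight, which is the "2-Poisson / mixture model" intuition the abstract describes.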
  7. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs. Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. 
There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesaurus and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
    Date
    20. 1.2015 18:30:22
  8. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    
    Date
    22. 9.2007 15:41:14
  9. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatic extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. 
The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
  10. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    
    Date
    22. 9.2007 15:41:14
  11. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    
    Date
    22. 9.2007 15:41:14
  12. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  13. Heflin, J.; Hendler, J.: ¬A portrait of the Semantic Web in action (2001) 0.02
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
  14. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.02
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  15. Ziegler, C.: Smartes Chaos : Web 2.0 versus Semantic Web (2006) 0.02
    
  16. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    
    Date
    31. 7.2010 16:58:22
  17. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.01
    
    Date
    15. 3.2011 19:21:22
  18. OWL Web Ontology Language Test Cases (2004) 0.01
    
    Date
    14. 8.2011 13:33:22
  19. Baumer, C.; Reichenberger, K.: Business Semantics - Praxis und Perspektiven (2006) 0.01
    
  20. Severiens, T.; Thiemann, C.: RDF database for PhysNet and similar portals (2006) 0.01
    

Languages

  • e 70
  • d 15

Types

  • a 56
  • el 23
  • m 13
  • s 6
  • n 2
  • r 2
  • x 1