Search (69 results, page 1 of 4)

  • Filter: theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Type
    x
  2. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.04
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC), 4th edition, and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of combining two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility through several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their representation at various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process.
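    The class-to-term pairing the abstract describes can be illustrated with a minimal sketch. The URIs, notation, and labels below are invented for illustration and are not actual CCT entries:

```python
# Minimal sketch: encoding one hypothetical CCT class and its mapped
# thesaurus term as SKOS triples, following the class <-> term pairing
# the abstract describes. All identifiers and labels are invented.

def skos_concept(uri, pref_label, notation=None, broader=None, related=None):
    """Return SKOS triples (subject, predicate, object) for one concept."""
    triples = [(uri, "rdf:type", "skos:Concept"),
               (uri, "skos:prefLabel", pref_label)]
    if notation:
        triples.append((uri, "skos:notation", notation))
    if broader:
        triples.append((uri, "skos:broader", broader))
    for r in related or []:
        triples.append((uri, "skos:related", r))
    return triples

# A class from the classification half and its mapped thesaurus term:
cls = skos_concept("cct:TP311", "Computer software", notation="TP311",
                   broader="cct:TP3")
term = skos_concept("cct:t_software", "Software",
                    related=["cct:TP311"])  # the class <-> term mapping

for s, p, o in cls + term:
    print(f"{s} {p} {o} .")
```

    The class-side and term-side concepts stay distinct resources, with skos:related carrying the mapping, which mirrors the "corresponding thesaurus terms, and vice versa" structure the abstract mentions.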
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains through their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many Semantic Web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has yet to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - The result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
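    As a rough illustration of the kind of relation-matching rule the abstract describes (not the authors' actual implementation), one ontology can be extended with triples from another whose relation labels are lexical variants of its own. The relation names and the synonym table below are hypothetical stand-ins for the paper's lexical-entailment rules:

```python
# Illustrative sketch of rule-based relation unification: triples from
# ontology B are merged into A when B's relation label lexically matches
# one already used in A. The synonym table is an invented example.

SYNONYMS = {  # hypothetical lexical-entailment pairs
    "causes": {"leads to", "results in"},
    "part of": {"component of"},
}

def relations_match(r1, r2):
    """True if the two relation labels are equal or listed as synonyms."""
    r1, r2 = r1.lower().strip(), r2.lower().strip()
    if r1 == r2:
        return True
    return r2 in SYNONYMS.get(r1, set()) or r1 in SYNONYMS.get(r2, set())

def unify(onto_a, onto_b):
    """Extend ontology A with B's triples whose relation matches one in A."""
    rels_a = {r for (_, r, _) in onto_a}
    merged = list(onto_a)
    for (s, r, o) in onto_b:
        if any(relations_match(r, ra) for ra in rels_a):
            merged.append((s, r, o))
    return merged

a = [("smoking", "causes", "cancer")]
b = [("stress", "leads to", "illness"), ("stress", "supersedes", "calm")]
print(unify(a, b))
```

    A contradiction check, as the authors suggest, could reuse the same matching step and flag matched relations whose subject/object pairs disagree.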
    Date
    20. 1.2015 18:30:22
  4. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    
    Abstract
    Purpose - The growing volumes of semantic data available on the Web create a need to handle information overload. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use it, especially for lay users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set: the objective is that the user can form an idea of the overall structure of the data set. Different information architecture (IA) components supporting overview tasks have been developed so that they are automatically generated from semantic data, and evaluated with end users. Findings - The chosen IA components are well known to Web users, as they are present in most Web pages: navigation bars, site maps and site indexes. The authors complement them with treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative user-centered design methodology. Evaluations with end users have shown that users quickly get used to them, despite the fact that they are generated automatically from structured data, without requiring knowledge of the underlying semantic technologies, and that the different overview components complement each other as they address different information search needs. Originality/value - Overviews of semantic data sets cannot easily be obtained with current Semantic Web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional Web sites.
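    One of the overview components named in the abstract, the site index, can be sketched as an automatic grouping of concept labels by initial letter. The sample labels below are invented, not taken from the paper's data sets:

```python
# Sketch of one overview component the paper evaluates: a site index
# generated automatically from labels found in a semantic data set.
# The labels here are invented sample data.
from collections import defaultdict

def site_index(labels):
    """Group labels under their initial letter, as a site index would."""
    index = defaultdict(list)
    for label in sorted(labels):
        index[label[0].upper()].append(label)
    return dict(index)

labels = ["Painting", "Sculpture", "Pottery", "Architecture"]
print(site_index(labels))
# {'A': ['Architecture'], 'P': ['Painting', 'Pottery'], 'S': ['Sculpture']}
```

    The point of the paper's approach is that such components are derived from the data itself, so the same generation step works for any data set that exposes labels and a hierarchy.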
    Date
    20. 1.2015 18:30:22
  5. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    
    Date
    22. 9.2007 15:41:14
  6. Staab, S.: ¬Der Weg ins "Semantic Web" ist ein Schichtenmodell (2002) 0.02
    
    Content
    "How can semantic information be transmitted? The answer to this question is - to the great surprise of many - not simply XML. The standard language XML (with its various add-ons such as XLink and XPath) is widely used today to transmit information (for instance via XML/EDI for B2B transactions). Although it already brings a clear improvement over earlier idiosyncratic mechanisms (e.g. EDIFACT), XML per se is only of limited use for expressing semantic relationships. The structure of an XML document must not be equated with the semantics of the pieces of information it contains. The schema languages DTD (Document Type Definition) and XML Schema are too weak to convey all semantic relationships. To solve this problem, a layer model was conceived. It builds on the existing XML standards, with namespace mechanisms and XML Schema definitions, to transport information at the syntactic level. The expressiveness of XML, however, is considerably extended. This extension is based on the RDF (Resource Description Framework) standard. With this approach, complex statements can be modeled as triples. "Alexander believes that Andreas is an expert in clustering" is represented by "Alexander believes X", "X is a statement", "the subject of X is Andreas", "the predicate of X is 'expert in'" and "the object of X is clustering". Each resource is represented by a URI (Uniform Resource Identifier); for Andreas, for example, www.aifb.unikarlsruhe.de/WBS/aho would be a possible URI. With RDF Schema, generalization hierarchies can moreover be built, for instance to express "Andreas is an expert" or - put more precisely - "the thing behind www.aifb.unikarlsruhe.de/WBS/aho is an expert". The subsequent layers of the layer model deal with an increasingly fine-grained representation of content relationships. For example, semantic technologies comprise so-called "ontologies", which for a subject field describe not only categorizations but also rules. With ontological rule mechanisms, implicit connections can also be detected."
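    The reification pattern in the quoted text can be spelled out mechanically: the believed statement becomes a resource X described by subject, predicate, and object triples. The tuple representation below is only an illustration of that pattern:

```python
# The reification example from the text as plain triples:
# "Alexander believes that Andreas is an expert in clustering" becomes
# one statement resource X plus triples describing X's parts.

def reify(believer, subject, predicate, obj, stmt_id="X"):
    """Return the triples that reify one believed statement."""
    return [
        (believer, "believes", stmt_id),
        (stmt_id, "rdf:type", "rdf:Statement"),
        (stmt_id, "rdf:subject", subject),
        (stmt_id, "rdf:predicate", predicate),
        (stmt_id, "rdf:object", obj),
    ]

triples = reify("Alexander", "Andreas", "expert-in", "Clustering")
for t in triples:
    print(t)
```

    In real RDF each of these names would be a URI, as the text notes for Andreas; here they are kept as readable strings.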
  7. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective, and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms, multimedia content and semantic web. The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in a bias against ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book on multimedia content; more emphasis on general multimedia description and extraction could have been provided."
  8. Calì, A.; Gottlob, G.; Pieris, A.: ¬The return of the entity-relationship model : ontological query answering (2012) 0.02
    
    Abstract
    The Entity-Relationship (ER) model is a fundamental formalism for conceptual modeling in database design; it was introduced by Chen in his milestone paper and is now widely used, being flexible and easily understood by practitioners. With the rise of the Semantic Web, conceptual modeling formalisms have gained importance again as ontology formalisms, in Semantic Web parlance. Ontologies and conceptual models are aimed at representing, rather than the structure of data, the domain of interest, that is, the fragment of the real world being represented by the data and the schema. A prominent formalism for modeling ontologies is Description Logics (DLs), which are decidable fragments of first-order logic particularly suitable for ontological modeling and querying. In particular, DL ontologies are sets of assertions describing sets of objects and (usually binary) relations among such sets, in exactly the same fashion as the ER model. Recently, research on DLs has focused on the problem of answering queries under ontologies: given a query q, an instance B, and an ontology X, answering q under B and X amounts to computing the answers that are logically entailed from B by using the assertions of X. In this context, where data size is usually large, a central issue is the data complexity of query answering, i.e., the computational complexity with respect to the data set B only, while the ontology X and the query q are fixed.
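    A toy sketch of this setting, far simpler than the DL formalisms the abstract discusses: the instance B is saturated under the ontology's assertions, then the query is answered over the entailed facts. The restriction to unary predicates and inclusion rules is an assumption made for brevity:

```python
# Toy illustration of ontological query answering: forward-chain the
# instance B under simple inclusion rules ("every S is a T"), then
# answer a query over the entailed facts. Not the paper's algorithms.

def chase(facts, rules):
    """Saturate facts under rules given as (head, body) predicate pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            for pred, arg in list(facts):
                if pred == body and (head, arg) not in facts:
                    facts.add((head, arg))
                    changed = True
    return facts

B = {("manager", "ann"), ("employee", "bob")}   # the instance
X = [("employee", "manager")]                   # every manager is an employee
q = "employee"                                  # the query predicate
answers = {a for p, a in chase(B, X) if p == q}
print(sorted(answers))                          # ['ann', 'bob']
```

    The data-complexity question the abstract raises asks how this computation scales when only B grows while X and q stay fixed.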
  9. Urro, R.; Winiwarter, W.: Specifying ontologies : Linguistic aspects in problem-driven knowledge engineering (2001) 0.02
    
    Abstract
    The WWW includes, on various levels, systems of signs, not all of which are standardized as would be necessary for a real Semantic Web, and not all of which can be standardized. Linguistic theories can contribute not only to the translation thus needed between sign systems, be they natural language systems or otherwise structured systems of knowledge representation, but also, of course, to standardization efforts. Within the current EC3 research framework for x-commerce, linguistic theories will play their part as they provide modeling analogies and patterns for the construction of a central knowledge base.
    Isbn
    0-7695-1393-X
  10. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    
    Date
    22. 9.2007 15:41:14
  11. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    
    Date
    22. 9.2007 15:41:14
  12. Heflin, J.; Hendler, J.: ¬A portrait of the Semantic Web in action (2001) 0.02
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
  13. Cahier, J.-P.; Ma, X.; Zaher, L'H.: Document and item-based modeling : a hybrid method for a socio-semantic web (2010) 0.02
    
  14. Wagner, S.: Barrierefreie und thesaurusbasierte Suchfunktion für das Webportal der Stadt Nürnberg (2007) 0.02
    
    Type
    x
  15. Ehlen, D.: Semantic Wiki : Konzeption eines Semantic MediaWiki für das Reallexikon zur Deutschen Kunstgeschichte (2010) 0.02
    
    Type
    x
  16. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.02
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  17. Schäfer, D.: Konzeption, prototypische Implementierung und Evaluierung eines RDF-basierten Bibliothekskatalogs für Online-Dissertationen (2008) 0.02
    Type
    x
  18. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.02
    Type
    x
  19. Aufreiter, M.: Informationsvisualisierung und Navigation im Semantic Web (2008) 0.02
    Type
    x
  20. Smith, D.A.: Exploratory and faceted browsing over heterogeneous and cross-domain data sources (2011) 0.02
    Type
    x

Languages

  • e 50
  • d 18

Types

  • a 36
  • el 20
  • x 14
  • m 10
  • s 4
  • n 1