Search (72 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.06
    0.057277065 = product of:
      0.11455413 = sum of:
        0.11455413 = sum of:
          0.05794589 = weight(_text_:b in 3376) [ClassicSimilarity], result of:
            0.05794589 = score(doc=3376,freq=2.0), product of:
              0.18503809 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.052226946 = queryNorm
              0.31315655 = fieldWeight in 3376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.0625 = fieldNorm(doc=3376)
          0.05660824 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
            0.05660824 = score(doc=3376,freq=2.0), product of:
              0.18288986 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052226946 = queryNorm
              0.30952093 = fieldWeight in 3376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3376)
      0.5 = coord(1/2)
    
    Date
    31. 7.2010 16:58:22
    Source
    Semantic digital libraries. Eds.: S.R. Kruk, B. McDaniel
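The score breakdown shown for this first result is Lucene's ClassicSimilarity "explain" output. A minimal sketch that reproduces one weight line from the numbers in the tree, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

```python
import math

def classic_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One weight(...) line of an explain tree: queryWeight * fieldWeight."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 3.542962 for docFreq=3476
    query_weight = idf * query_norm                    # idf * queryNorm = 0.18503809
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm = 0.31315655
    return query_weight * field_weight

# First term of result 1: weight(_text_:b in 3376), freq=2.0, fieldNorm=0.0625
w = classic_weight(freq=2.0, doc_freq=3476, max_docs=44218,
                   query_norm=0.052226946, field_norm=0.0625)
# w is close to the 0.05794589 shown in the tree above
```

The two weight terms are then summed and multiplied by the coord(1/2) factor to give the final 0.057277065.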
  2. Baroncini, S.; Sartini, B.; Erp, M. van; Tomasi, F.; Gangemi, A.: Is dc:subject enough? : A landscape on iconography and iconology statements of knowledge graphs in the semantic web (2023) 0.04
    Abstract
    In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on the icon aspects.
    Design/methodology/approach - This study's analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians' theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures' suitability to describe icon information, through quantitative and qualitative assessment, and (2) their content, qualitatively assessed in terms of correctness and completeness.
    Findings - This study's results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
    Originality/value - The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions as a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
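The completeness measure this abstract describes can be illustrated with a toy example. The records, property names, and counts below are invented for illustration, not taken from the evaluated KGs:

```python
# Toy artwork records: "subject" stands in for a dc:subject-like property,
# "icon_statements" for finer-grained iconographical/iconological description.
artworks = [
    {"id": "work1", "subject": ["Annunciation"], "icon_statements": 3},
    {"id": "work2", "subject": ["Madonna"],      "icon_statements": 0},
    {"id": "work3", "subject": [],               "icon_statements": 0},
]

def completeness(records, has_value):
    """Share of records for which has_value(record) holds."""
    return sum(1 for r in records if has_value(r)) / len(records)

subject_cov = completeness(artworks, lambda r: bool(r["subject"]))        # 2/3
icon_cov = completeness(artworks, lambda r: r["icon_statements"] > 0)     # 1/3
```

A gap between the two coverages, as in this toy data, is the kind of incompleteness the structure and content evaluations are designed to surface.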
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
    Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and on lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
    Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's good potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
    Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research.
    Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
    Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
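The relation-matching idea in the abstract above can be sketched in miniature. The ontologies, predicate names, and the hand-made entailment table below are all hypothetical stand-ins for the paper's lexical/textual entailment rules:

```python
# Relations as (subject, predicate, object) triples from two toy ontologies.
ont_a = {("vaccine", "prevents", "disease")}
ont_b = {("vaccine", "protects_against", "disease")}

# Stand-in for lexical/textual entailment between predicate names.
entails = {("protects_against", "prevents"), ("prevents", "protects_against")}

def unify(a, b, entails):
    """Extend ontology a with relations from b whose predicate matches, or
    entails, a predicate already present in a."""
    unified = set(a)
    preds_a = {p for (_, p, _) in a}
    for (s, p, o) in b:
        if p in preds_a or any((p, q) in entails for q in preds_a):
            unified.add((s, p, o))
    return unified

merged = unify(ont_a, ont_b, entails)
```

The same table could flag contradictory relations by recording negative entailments, which is the second use of the rules the authors mention.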
  4. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    Abstract
    Purpose - The growing volumes of semantic data available on the web result in the need to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
    Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set: the objective is that the user is capable of getting an idea of the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed, so that they are automatically generated from semantic data, and evaluated with end-users.
    Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that users easily get used to them despite the fact that they are generated automatically from structured data, without requiring knowledge of the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs.
    Originality/value - Obtaining overviews of semantic data sets cannot easily be done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
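Generating a navigation component from a concept hierarchy, as this abstract describes, can be sketched as follows. The concept names and the flat child-to-parent table are invented; the real system works over SKOS vocabularies and ontologies:

```python
# skos:broader-style child -> parent links from a toy vocabulary.
broader = {
    "Ontologies": "Semantic Web",
    "Linked Data": "Semantic Web",
    "SKOS": "Ontologies",
}

def site_map(broader):
    """Invert child->parent links into a parent->children map, i.e. the
    hierarchical structure a site map or navigation bar can be rendered from."""
    children = {}
    for child, parent in broader.items():
        children.setdefault(parent, []).append(child)
    return children

nav = site_map(broader)
```

A site index would instead sort all concept labels alphabetically, and a Treemap would recurse into the same parent-to-children map; the point is that all three overview components derive from one structure already present in the semantic data.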
  5. Thuraisingham, B.: XML databases and the semantic Web (2002) 0.03
  6. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    Date
    22. 9.2007 15:41:14
  7. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, p.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concepts of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. It will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches.
    The first part of the book covers the definition of the two basic terms: multimedia content and semantic web. The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in a bias against ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book on multimedia content; more emphasis on general multimedia description and extraction could have been provided."
  8. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    Date
    22. 9.2007 15:41:14
  9. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    Date
    22. 9.2007 15:41:14
  10. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  11. Heflin, J.; Hendler, J.: ¬A portrait of the Semantic Web in action (2001) 0.02
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
  12. RDF Semantics (2004) 0.02
    Editor
    Hayes, P. and B. McBride
  13. Calì, A.; Gottlob, G.; Pieris, A.: ¬The return of the entity-relationship model : ontological query answering (2012) 0.02
    Abstract
    The Entity-Relationship (ER) model is a fundamental formalism for conceptual modeling in database design; it was introduced by Chen in his milestone paper, and it is now widely used, being flexible and easily understood by practitioners. With the rise of the Semantic Web, conceptual modeling formalisms have gained importance again as ontology formalisms, in the Semantic Web parlance. Ontologies and conceptual models are aimed at representing not the structure of the data but the domain of interest, that is, the fragment of the real world that is being represented by the data and the schema. A prominent family of formalisms for modeling ontologies are Description Logics (DLs), decidable fragments of first-order logic particularly suitable for ontological modeling and querying. In particular, DL ontologies are sets of assertions describing sets of objects and (usually binary) relations among such sets, exactly in the same fashion as the ER model. Recently, research on DLs has been focusing on the problem of answering queries under ontologies: given a query q, an instance B, and an ontology X, answering q under B and X amounts to computing the answers that are logically entailed from B by using the assertions of X. In this context, where data size is usually large, a central issue is the data complexity of query answering, i.e., the computational complexity with respect to the data set B only, while the ontology X and the query q are fixed.
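The idea of answering a query q under an instance B and an ontology X can be illustrated with a minimal forward-chaining sketch. The facts, the single inclusion rule, and all names below are toy inventions, not the paper's formalism:

```python
# Instance B as a set of facts, ontology X as simple inclusion rules:
# each rule (p, q) says p(x, y) implies q(x, y).
facts = {("employee", "alice", "acme")}
rules = [("employee", "person_works_for")]

def saturate(facts, rules):
    """Compute all facts logically entailed from B using the assertions of X."""
    entailed = set(facts)
    changed = True
    while changed:
        changed = False
        for (p, q) in rules:
            for (pred, *args) in list(entailed):
                if pred == p and (q, *args) not in entailed:
                    entailed.add((q, *args))
                    changed = True
    return entailed

def answer(query_pred, facts, rules):
    """Answers to the atomic query query_pred(x, y) under B and X."""
    return {tuple(args) for (pred, *args) in saturate(facts, rules)
            if pred == query_pred}

ans = answer("person_works_for", facts, rules)
```

Data complexity, in these terms, asks how the cost of `answer` grows as `facts` grows while `rules` and the query predicate stay fixed.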
  14. Dellschaft, K.; Hachenberg, C.: Repräsentation von Wissensorganisationssystemen im Semantic Web : ein Best Practice Guide (2011) 0.02
    Abstract
    This document presents terms, principles and methods that have proven helpful in creating Semantic Web-conformant representations of knowledge organization systems (KOS), such as thesauri and classification schemes. It is aimed at organizations, such as libraries, that want to publish their traditional knowledge organization systems on the Semantic Web. The procedures and principles described in this document are not to be understood as normative; they are only meant to help readers benefit from experience gained so far and to ease entry into the most important concepts and techniques of the Semantic Web. In many places the document also points to further literature on the topic and to relevant standards and specifications from the Semantic Web field.
  15. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.02
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  16. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.01
    Date
    15. 3.2011 19:21:22
  17. OWL Web Ontology Language Test Cases (2004) 0.01
    Date
    14. 8.2011 13:33:22
  18. Koutsomitropoulos, D.A.; Solomou, G.D.; Alexopoulos, A.D.; Papatheodorou, T.S.: Semantic metadata interoperability and inference-based querying in digital repositories (2009) 0.01
    Abstract
    Metadata applications have evolved over time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Rarely, however, are these semantics communicated in machine-readable and machine-understandable ways. At the same time, the process of transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this article we take up the well-established Dublin Core metadata standard, as well as other metadata schemata that often appear in digital repository setups, and propose a corresponding Semantic Web OWL ontology. In this process we cope with the discrepancies and incompatibilities typical of such attempts in novel ways. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual metadata records. We conclude by presenting a working prototype that provides inference-based querying on top of digital repositories.
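As a toy illustration of the kind of explicit description the abstract argues for (not the authors' actual ontology or mapping), a flat Dublin Core record can be lifted into subject-predicate-object triples; the document URI and record contents below are hypothetical:

```python
# Hypothetical example: lifting a flat Dublin Core record into RDF-style
# triples, the kind of explicit Semantic Web description the abstract
# advocates. The URI and record values are invented for illustration.
DC = "http://purl.org/dc/elements/1.1/"

record = {
    "title": "Semantic metadata interoperability",
    "creator": "Koutsomitropoulos, D.A.",
    "subject": "Semantic Web",
}

doc_uri = "http://example.org/repo/doc/3731"  # hypothetical identifier
triples = [(doc_uri, DC + field, value) for field, value in record.items()]

for triple in triples:
    print(triple)
```

A real mapping would additionally have to resolve the discrepancies the abstract mentions, e.g. deciding which DC elements become OWL object properties versus datatype properties.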
  19. Wohlkinger, B.; Pellegrini, T.: Semantic Systems Technologiepolitik in der Europäischen Union (2006) 0.01
    0.0126756625 = coord(1/2) × coord(1/2) × weight(_text_:b in 5790) [ClassicSimilarity]
      = 0.25 × queryWeight 0.18503809 (idf 3.542962, docFreq 3476 of 44218, queryNorm 0.052226946) × fieldWeight 0.27401197 (tf 1.4142135 at freq 2.0, idf 3.542962, fieldNorm 0.0546875)
    
  20. Granitzer, M.: Statistische Verfahren der Textanalyse (2006) 0.01
    0.0126756625 = coord(1/2) × coord(1/2) × weight(_text_:b in 5809) [ClassicSimilarity]
      = 0.25 × queryWeight 0.18503809 (idf 3.542962, docFreq 3476 of 44218, queryNorm 0.052226946) × fieldWeight 0.27401197 (tf 1.4142135 at freq 2.0, idf 3.542962, fieldNorm 0.0546875)
    
    Abstract
    This article gives an overview of statistical methods of text analysis in the context of the Semantic Web. It opens with a discussion of methods and common techniques for preprocessing texts, such as stemming and part-of-speech tagging. The representations introduced in this way serve as the basis for statistical feature analyses as well as for more advanced techniques such as information extraction and machine learning. These specialized techniques are presented in overview form, with the aspects most relevant to the Semantic Web treated in detail. The article closes with the application of the presented techniques to the creation and maintenance of ontologies, together with pointers to further reading.
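The preprocessing pipeline the abstract surveys (tokenization, stemming, a bag-of-words representation) can be sketched minimally. The suffix list below is a deliberately crude stand-in for a real stemmer such as Porter's, for illustration only:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase and split on anything that is not a (German) letter.
    return re.findall(r"[a-zäöüß]+", text.lower())

def crude_stem(token: str) -> str:
    # Illustrative suffix stripping only -- NOT a linguistically sound
    # stemmer; a real pipeline would use e.g. a Porter-style algorithm.
    for suffix in ("ungen", "ung", "en", "er", "e", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def term_frequencies(text: str) -> Counter:
    # Bag-of-words representation: stemmed token -> frequency.
    return Counter(crude_stem(t) for t in tokenize(text))

tf = term_frequencies("Ontologien und Ontologie: Wartung von Ontologien")
print(tf)
```

Even this toy version shows the point of stemming for the statistical steps that follow: the surface forms "Ontologie" and "Ontologien" collapse onto one feature, so frequency counts accumulate per concept rather than per inflection.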

Languages

  • e 56
  • d 16

Types

  • a 45
  • el 22
  • m 10
  • s 5
  • n 2
  • r 1
  • x 1