Search (17 results, page 1 of 1)

  • theme_ss:"Semantic Web"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.03
    0.027233064 = product of:
      0.06808266 = sum of:
        0.05272096 = weight(_text_:system in 4331) [ClassicSimilarity], result of:
          0.05272096 = score(doc=4331,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3936941 = fieldWeight in 4331, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.015361699 = product of:
          0.046085097 = sum of:
            0.046085097 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.046085097 = score(doc=4331,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
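The explain tree above is standard Lucene ClassicSimilarity arithmetic. As a check, a minimal Python sketch recomputing the score of result 1 from the printed values (no Lucene required; the formula is read directly off the tree):

```python
import math

def term_score(freq, idf, query_norm, field_norm, coord=1.0):
    # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight * coord

# Values copied from the explain tree for doc 4331:
s_system = term_score(freq=4.0, idf=3.1495528, query_norm=0.04251826, field_norm=0.0625)
s_22 = term_score(freq=2.0, idf=3.5018296, query_norm=0.04251826, field_norm=0.0625,
                  coord=1.0 / 3.0)        # inner coord(1/3)
score = (s_system + s_22) * (2.0 / 5.0)   # outer coord(2/5): 2 of 5 query clauses matched
print(score)  # ~0.027233064, the score shown for result 1
```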
    
    Abstract
    The Semantic Web - or Linked Data - has the potential to revolutionize the availability of data and knowledge, as well as access to them. Knowledge organization systems such as thesauri, which index and structure data by subject, can make a major contribution here. Unfortunately, many of these systems are still available only in book form or within specialized applications. So how can they be made usable for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the Web and linked with other resources.
    Date
    15. 3.2011 19:21:22
    Source
    http://metadaten-twr.org/2011/01/19/skos-simple-knowledge-organisation-system/
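To illustrate the kind of "translation" the abstract describes, a minimal sketch using Python's rdflib; the concept URI and labels below are invented for illustration, not taken from any real thesaurus:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, SKOS

g = Graph()
concept = URIRef("http://example.org/thesaurus/semantic-web")  # hypothetical URI

# A thesaurus entry becomes a web-citable skos:Concept with labels and relations.
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Semantic Web", lang="en")))
g.add((concept, SKOS.altLabel, Literal("Web of Data", lang="en")))
g.add((concept, SKOS.broader, URIRef("http://example.org/thesaurus/world-wide-web")))

print(g.serialize(format="turtle"))  # a form that can be cited and linked on the Web
```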
  2. Mirizzi, R.: Exploratory browsing in the Web of Data (2011) 0.02
    0.022501035 = product of:
      0.056252588 = sum of:
        0.03994287 = weight(_text_:context in 4803) [ClassicSimilarity], result of:
          0.03994287 = score(doc=4803,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22666055 = fieldWeight in 4803, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4803)
        0.016309716 = weight(_text_:system in 4803) [ClassicSimilarity], result of:
          0.016309716 = score(doc=4803,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.1217929 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4803)
      0.4 = coord(2/5)
    
    Abstract
    The Linked Data initiative and the state of the art in semantic technologies have given rise to a wave of brand new search and mash-up applications. The basic idea is to have smarter lookup services for a huge, distributed and social knowledge base. All these applications catch and (re)propose, under a semantic data perspective, the view of the classical Web as a distributed collection of documents to retrieve. The interlinked nature of the Web, and consequently of the Semantic Web, is exploited (just) to collect and aggregate data coming from different sources. Of course, this is a big step forward in search and Web technologies, but if we limit our investigation to retrieval tasks, we miss another important feature of the current Web: browsing, and in particular exploratory browsing (a.k.a. exploratory search). Thanks to its hyperlinked nature, the Web defined a new way of browsing documents and knowledge: selection by lookup, navigation and trial-and-error tactics were, and still are, exploited by users to search for relevant information satisfying some initial requirements. The basic assumptions behind a lookup search, typical of Information Retrieval (IR) systems, are no longer valid in an exploratory browsing context. An IR system, such as a search engine, assumes that the user has a clear picture of what she is looking for and that she knows the terminology of the specific knowledge space. On the other side, as argued in the literature, the main challenges in exploratory search can be summarized as: support querying and rapid query refinement; offer facets and metadata-based result filtering; leverage search context; support learning and understanding; offer visualization to support insight/decision making; facilitate collaboration. In Section 3 we will show two applications for exploratory search in the Semantic Web addressing some of the above challenges.
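One of the exploratory-search challenges listed above, offering facets and metadata-based result filtering, can be sketched in a few lines; the records and field names are made up for illustration:

```python
from collections import Counter

# Hypothetical result set: each hit carries metadata fields usable as facets.
results = [
    {"title": "SKOS intro", "year": 2011, "type": "el"},
    {"title": "SWSE paper", "year": 2011, "type": "a"},
    {"title": "LED demo",   "year": 2010, "type": "el"},
]

def facet_counts(hits, field):
    """Count distinct values of one metadata field, for display as filters."""
    return Counter(h[field] for h in hits)

def apply_facet(hits, field, value):
    """Narrow the result set by one facet selection."""
    return [h for h in hits if h[field] == value]

print(facet_counts(results, "year"))                             # Counter({2011: 2, 2010: 1})
print([h["title"] for h in apply_facet(results, "type", "el")])  # ['SKOS intro', 'LED demo']
```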
  3. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.013160261 = product of:
      0.032900654 = sum of:
        0.023299592 = weight(_text_:system in 4553) [ClassicSimilarity], result of:
          0.023299592 = score(doc=4553,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 4553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.028803186 = score(doc=4553,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which hold high promise for scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall relative to the deductive gold standard.
    Date
    16.11.2018 14:22:01
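The evaluation described above, comparing a learned reasoner against the deductive gold standard, reduces to set-based precision and recall over inferred triples. A minimal sketch with placeholder triples:

```python
def precision_recall(predicted, gold):
    """Compare triples inferred by the learned model against the
    deductive closure used as the gold standard."""
    true_pos = len(predicted & gold)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical triples, for illustration only:
gold = {("ex:a", "rdf:type", "ex:C"), ("ex:b", "rdf:type", "ex:C")}
predicted = {("ex:a", "rdf:type", "ex:C"), ("ex:a", "rdf:type", "ex:D")}
print(precision_recall(predicted, gold))  # (0.5, 0.5)
```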
  4. Cahier, J.-P.; Zaher, L'H.; Isoard, G.: Document et modèle pour l'action, une méthode pour le web socio-sémantique : application à un web 2.0 en développement durable (2010) 0.01
    0.012558117 = product of:
      0.06279058 = sum of:
        0.06279058 = weight(_text_:index in 4836) [ClassicSimilarity], result of:
          0.06279058 = score(doc=4836,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.33795667 = fieldWeight in 4836, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4836)
      0.2 = coord(1/5)
    
    Abstract
    We present the DOCMA method (DOCument and Model for Action), focused on socio-semantic Web applications in large communities of interest. DOCMA is dedicated to end-users without any background in Information Science. Community members can elicit, structure and index shared business items emerging from their inquiry (such as projects, actors, products, or geographically situated objects of interest). We apply DOCMA to an experiment in the field of Sustainable Development: the Cartodd-Map21 collaborative Web portal.
  5. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.01
    0.011183805 = product of:
      0.055919025 = sum of:
        0.055919025 = weight(_text_:system in 3829) [ClassicSimilarity], result of:
          0.055919025 = score(doc=3829,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.41757566 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.2 = coord(1/5)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
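A hedged sketch of the semantic-indexing idea mentioned in the abstract: documents are indexed by ontology concept IDs rather than raw keywords, so synonymous terms retrieve the same documents. The concept table and documents are invented; the thesis's actual pipeline is far richer:

```python
from collections import defaultdict

# Hypothetical mapping from surface keywords to ontology concept IDs.
CONCEPTS = {"goal": "soccer:Goal", "score": "soccer:Goal", "keeper": "soccer:Goalkeeper"}

docs = {1: "the keeper saved before the goal", 2: "a late score won the match"}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        if word in CONCEPTS:
            index[CONCEPTS[word]].add(doc_id)   # index the concept, not the word

def search(keyword):
    return sorted(index.get(CONCEPTS.get(keyword, ""), set()))

print(search("goal"))   # [1, 2] - 'goal' and 'score' share the concept soccer:Goal
```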
  6. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.009258853 = product of:
      0.046294264 = sum of:
        0.046294264 = product of:
          0.06944139 = sum of:
            0.034877572 = weight(_text_:29 in 4649) [ClassicSimilarity], result of:
              0.034877572 = score(doc=4649,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
            0.03456382 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.03456382 = score(doc=4649,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  7. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.01
    0.008069678 = product of:
      0.040348392 = sum of:
        0.040348392 = weight(_text_:context in 2277) [ClassicSimilarity], result of:
          0.040348392 = score(doc=2277,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 2277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2277)
      0.2 = coord(1/5)
    
    Abstract
    Data munging, or the work of remediating, enhancing and transforming library datasets for new or improved uses, has become more important and staff-inclusive in many library technology discussions and projects. Many times we know how we want our data to look, as well as how we want our data to act in discovery interfaces or when exposed, but we are uncertain how to make the data we have into the data we want. This article introduces and compares two library data munging tools that can help: LODRefine (OpenRefine with the DERI RDF Extension) and Catmandu. The strengths and best practices of each tool are discussed in the context of metadata munging use cases for an institution's metadata migration workflow. There is a focus on Linked Open Data modeling and transformation applications of each tool, in particular how metadataists, catalogers, and programmers can create metadata quality reports, enhance existing data with LOD sets, and transform that data to an RDF model. Integration of these tools with other systems and projects, the use of domain-specific transformation languages, and the expansion of vocabulary reconciliation services are mentioned.
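The "metadata quality reports" mentioned above can be approximated with a simple field-completeness count before migration. A hedged sketch; the field names and records are hypothetical, and Catmandu and LODRefine each offer richer, tool-specific ways to do this:

```python
from collections import Counter

# Hypothetical pre-migration records:
records = [
    {"title": "Map of Ghent", "creator": "", "date": "1641"},
    {"title": "Letters, vol. 2", "creator": "Smith, J.", "date": None},
]

REQUIRED = ("title", "creator", "date")

missing = Counter()
for rec in records:
    for field in REQUIRED:
        if not rec.get(field):          # counts absent, None, and empty values
            missing[field] += 1

for field in REQUIRED:
    print(f"{field}: {missing[field]} of {len(records)} records missing")
```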
  8. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011) 0.01
    0.0065901205 = product of:
      0.032950602 = sum of:
        0.032950602 = weight(_text_:system in 438) [ClassicSimilarity], result of:
          0.032950602 = score(doc=438,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=438)
      0.2 = coord(1/5)
    
    Abstract
    In this paper, we discuss the architecture and implementation of the Semantic Web Search Engine (SWSE). Following traditional search engine architecture, SWSE consists of crawling, data enhancing, indexing and a user interface for search, browsing and retrieval of information; unlike traditional search engines, SWSE operates over RDF Web data - loosely also known as Linked Data - which implies unique challenges for the system design, architecture, algorithms, implementation and user interface. In particular, many challenges exist in adopting Semantic Web technologies for Web data: the unique challenges of the Web - in terms of scale, unreliability, inconsistency and noise - are largely overlooked by the current Semantic Web standards. Herein, we describe the current SWSE system, initially detailing the architecture and later elaborating upon the function, design, implementation and performance of each individual component. In so doing, we also give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data. Throughout, we offer evaluation and complementary argumentation to support our design choices, and also offer discussion on future directions and open research questions. Later, we also provide candid discussion relating to the difficulties currently faced in bringing such a search engine into the mainstream, and lessons learnt from roughly six years working on the Semantic Web Search Engine project.
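A hedged sketch of the component chain the abstract names (crawling, data enhancing, indexing, search). The enhancing step here merges identifiers linked by owl:sameAs, one common form of consolidation; this is an illustration under assumed data, not SWSE's actual implementation:

```python
# Toy crawled triples: (subject, predicate, object)
crawled = [
    ("ex:TimBL", "foaf:name", "Tim Berners-Lee"),
    ("ex:timbl2", "owl:sameAs", "ex:TimBL"),
    ("ex:timbl2", "foaf:topic", "ex:SemanticWeb"),
]

def enhance(triples):
    """Stand-in for 'data enhancing': merge identifiers linked by owl:sameAs."""
    alias = {s: o for s, p, o in triples if p == "owl:sameAs"}
    canon = lambda x: alias.get(x, x)
    return [(canon(s), p, canon(o)) for s, p, o in triples if p != "owl:sameAs"]

def index(triples):
    """Stand-in for indexing: entity -> all triples mentioning it."""
    idx = {}
    for t in triples:
        for term in (t[0], t[2]):
            idx.setdefault(term, []).append(t)
    return idx

idx = index(enhance(crawled))
print(idx["ex:TimBL"])  # both facts now attach to the consolidated identifier
```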
  9. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.01
    0.0065901205 = product of:
      0.032950602 = sum of:
        0.032950602 = weight(_text_:system in 1210) [ClassicSimilarity], result of:
          0.032950602 = score(doc=1210,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 1210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.2 = coord(1/5)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents are being created with well-defined structures. Though these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and exposes little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate these data onto the documents with formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations for them. This research project consists of three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations for documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
  10. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.01
    0.0064557428 = product of:
      0.032278713 = sum of:
        0.032278713 = weight(_text_:context in 4796) [ClassicSimilarity], result of:
          0.032278713 = score(doc=4796,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
      0.2 = coord(1/5)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
  11. Mirizzi, R.; Noia, T. Di: From exploratory search to Web Search and back (2010) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 4802) [ClassicSimilarity], result of:
          0.027959513 = score(doc=4802,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 4802, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4802)
      0.2 = coord(1/5)
    
    Abstract
    The power of search is without doubt one of the main reasons for the success of the Web. Currently available Web search engines return results with high precision. Nevertheless, if we limit our attention to lookup search only, we miss another important search task. In exploratory search, the user wants not only to find documents relevant to her query but is also interested in learning, discovering and understanding novel knowledge on complex and sometimes unknown topics. In the paper we address this issue by presenting LED, a web-based system that aims to improve (lookup) Web search by enabling users to properly explore the knowledge associated with her query. We rely on DBpedia to explore the semantics of keywords within the query, thus suggesting potentially interesting related topics/keywords to the user.
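A hedged sketch of the kind of DBpedia lookup such a system might perform: suggesting topics that share a Wikipedia category with the query concept. It uses the public SPARQL endpoint via SPARQLWrapper; the query is illustrative, not LED's actual one:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dct: <http://purl.org/dc/terms/>
SELECT DISTINCT ?related WHERE {
  dbr:Semantic_Web dct:subject ?cat .
  ?related dct:subject ?cat .
  FILTER (?related != dbr:Semantic_Web)
} LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["related"]["value"])   # candidate related topics for the user
```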
  12. Maltese, V.; Farazi, F.: Towards the integration of knowledge organization systems with the linked data cloud (2011) 0.00
    0.0046599186 = product of:
      0.023299592 = sum of:
        0.023299592 = weight(_text_:system in 602) [ClassicSimilarity], result of:
          0.023299592 = score(doc=602,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=602)
      0.2 = coord(1/5)
    
    Abstract
    Since it must represent the shared view of all the people involved, building a Knowledge Organization System (KOS) from scratch is extremely costly, and it is therefore fundamental to reuse existing resources. This can be done by progressively extending the KOS with knowledge coming from similar KOS and by promoting interoperability among them. The linked data initiative is indeed encouraging people to share and integrate their datasets into a giant network of interconnected resources, enabling different applications to interoperate and share their data. However, the integration should take into account the purpose of the datasets and make their semantics explicit. In fact, a difference in purpose is reflected in a difference in semantics. With this paper we (a) highlight the potential problems that may arise by not taking purpose and semantics into account, (b) make clear how a difference in purpose is reflected in totally different semantics and (c) provide an algorithm to translate from one semantics into another as a preliminary step towards the integration of ontologies designed for different purposes. This will allow the ontologies to be reused even in contexts different from those in which they were designed.
  13. Hyvönen, E.; Leskinen, P.; Tamper, M.; Keravuori, K.; Rantala, H.; Ikkala, E.; Tuominen, J.: BiographySampo - publishing and enriching biographies on the Semantic Web for digital humanities research (2019) 0.00
    0.0046599186 = product of:
      0.023299592 = sum of:
        0.023299592 = weight(_text_:system in 5799) [ClassicSimilarity], result of:
          0.023299592 = score(doc=5799,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 5799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5799)
      0.2 = coord(1/5)
    
    Abstract
    This paper argues for a paradigm shift in publishing and using biographical dictionaries on the web, based on Linked Data. The idea is to provide the user with an enhanced reading experience of biographies by enriching contents with data linking and reasoning. In addition, versatile tooling is provided for 1) biographical research of individual persons as well as for 2) prosopographical research on groups of people. To demonstrate and evaluate the new possibilities, we present the semantic portal "BiographySampo - Finnish Biographies on the Semantic Web". The system is based on a knowledge graph extracted automatically from a collection of 13,100 textual biographies, enriched with data linking to 16 external data sources, and by harvesting external collection data from libraries, museums, and archives. The portal was released in September 2018 for free public use at: http://biografiasampo.fi.
  14. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.00
    0.0046599186 = product of:
      0.023299592 = sum of:
        0.023299592 = weight(_text_:system in 5997) [ClassicSimilarity], result of:
          0.023299592 = score(doc=5997,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.2 = coord(1/5)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is also taken into account; three thesauri are chosen for this purpose: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
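To make the ISO 25964/SKOS comparison concrete, a sketch of the commonly cited correspondence between classic thesaurus relationships and SKOS properties; the sample entry is invented:

```python
# Conventional thesaurus-to-SKOS correspondences (simplified):
#   preferred term           -> skos:prefLabel
#   UF (non-preferred term)  -> skos:altLabel
#   BT (broader term)        -> skos:broader
#   NT (narrower term)       -> skos:narrower
#   RT (related term)        -> skos:related
ISO_TO_SKOS = {
    "BT": "skos:broader",
    "NT": "skos:narrower",
    "RT": "skos:related",
    "UF": "skos:altLabel",
}

entry = {"term": "Thesauri", "BT": ["Knowledge organization systems"],
         "RT": ["Ontologies"], "UF": ["Thesaurus"]}

triples = [("ex:thesauri", "skos:prefLabel", entry["term"])]
for rel, targets in entry.items():
    if rel in ISO_TO_SKOS:
        triples.extend(("ex:thesauri", ISO_TO_SKOS[rel], t) for t in targets)
print(*triples, sep="\n")
```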
  15. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.00
    0.0031002287 = product of:
      0.015501143 = sum of:
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 54) [ClassicSimilarity], result of:
              0.04650343 = score(doc=54,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 54, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=54)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    10.12.2020 9:29:12
  16. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0023299593 = product of:
      0.011649796 = sum of:
        0.011649796 = weight(_text_:system in 4232) [ClassicSimilarity], result of:
          0.011649796 = score(doc=4232,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.08699492 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
      0.2 = coord(1/5)
    
    Abstract
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are basic components of such algorithms and ultimately define (the order in) which resources are included in a path. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths: the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference for heuristically optimized minimal-cost paths. The effectiveness of paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths, each generated in a different way. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together; in fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, next to some important alternatives.
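Since the abstract names A* as the underlying algorithm, a minimal sketch over a toy weighted graph; the edge weights and the zero heuristic are placeholders, as the thesis's actual weights and serendipity heuristics are domain-specific:

```python
import heapq

def a_star(graph, start, goal, h=lambda n: 0):
    """Minimal A*: graph maps node -> [(neighbor, edge_cost), ...]."""
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w + h(nxt), cost + w, nxt, path + [nxt]))
    return None

# Toy linked-data-style graph: nodes are resources, edges are weighted relations.
graph = {
    "paper:A": [("conf:ISWC", 1.0), ("author:X", 2.0)],
    "conf:ISWC": [("paper:B", 1.0)],
    "author:X": [("paper:B", 0.5)],
}
print(a_star(graph, "paper:A", "paper:B"))  # (2.0, ['paper:A', 'conf:ISWC', 'paper:B'])
```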
  17. Firnkes, M.: Schöne neue Welt : der Content der Zukunft wird von Algorithmen bestimmt (2015) 0.00
    0.0023042548 = product of:
      0.011521274 = sum of:
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 2118) [ClassicSimilarity], result of:
              0.03456382 = score(doc=2118,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 2118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2118)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    5. 7.2015 22:02:31