Search (71 results, page 1 of 4)

  Active filters:
  • language_ss:"e"
  • theme_ss:"Semantic Web"
  • type_ss:"el"
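Each result below is listed with its relevance score (displayed as 0.00 here, since the raw values are small). The scores come from the underlying Lucene engine's ClassicSimilarity, a tf-idf ranking. As a rough sketch of how one term's contribution is computed, the following reproduces the per-term weight the engine reported for result 1 (term "und": freq=2, docFreq=13101, maxDocs=44218, queryNorm=0.021569785, fieldNorm=0.0390625); the helper function is our own, not Lucene API:

    import math

    def classic_term_weight(freq: float, doc_freq: int, max_docs: int,
                            query_norm: float, field_norm: float) -> float:
        """Per-term weight as in a Lucene ClassicSimilarity explanation:
        weight = queryWeight * fieldWeight
               = (idf * queryNorm) * (tf * idf * fieldNorm)."""
        tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
        return (idf * query_norm) * (tf * idf * field_norm)

    # Values the engine reported for result 1, term "und":
    w = classic_term_weight(2.0, 13101, 44218, 0.021569785, 0.0390625)
    print(w)  # ~0.0058533, matching the reported weight

Per result, the engine sums these term weights and multiplies by a coordination factor (coord(4/30) ≈ 0.133 for result 1, which matched 4 of 30 query clauses), giving result 1's final score of about 0.0026.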
  1. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011) 0.00
    Abstract
    In this paper, we discuss the architecture and implementation of the Semantic Web Search Engine (SWSE). Following traditional search engine architecture, SWSE consists of crawling, data enhancing, indexing and a user interface for search, browsing and retrieval of information; unlike traditional search engines, SWSE operates over RDF Web data - loosely also known as Linked Data - which implies unique challenges for the system design, architecture, algorithms, implementation and user interface. In particular, many challenges exist in adopting Semantic Web technologies for Web data: the unique challenges of the Web - in terms of scale, unreliability, inconsistency and noise - are largely overlooked by the current Semantic Web standards. Herein, we describe the current SWSE system, initially detailing the architecture and later elaborating upon the function, design, implementation and performance of each individual component. In so doing, we also give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data. Throughout, we offer evaluation and complementary argumentation to support our design choices, and also offer discussion on future directions and open research questions. Later, we also provide candid discussion relating to the difficulties currently faced in bringing such a search engine into the mainstream, and lessons learnt from roughly six years working on the Semantic Web Search Engine project.
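    As context for the crawl-and-index pipeline described above, here is a minimal, illustrative sketch (not SWSE's actual code) of the step that distinguishes it from a text search engine: parsing RDF data and building a keyword index over entity literals. The toy triples stand in for crawled Web data; rdflib is assumed.

    from collections import defaultdict
    from rdflib import Graph, Literal

    # Toy RDF input standing in for documents fetched by the crawler.
    data = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/axel> foaf:name "Axel Polleres" .
    <http://example.org/swse> foaf:name "Semantic Web Search Engine" .
    """
    g = Graph()
    g.parse(data=data, format="turtle")

    # Keyword index over entity literals: term -> subjects mentioning it.
    inverted = defaultdict(set)
    for subj, _pred, obj in g:
        if isinstance(obj, Literal):
            for term in str(obj).lower().split():
                inverted[term].add(str(subj))

    print(sorted(inverted["semantic"]))  # entities whose labels mention "semantic"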
    Content
See: http://swse.deri.org/ and http://swse.org/.
  2. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.00
    Abstract
Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
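    To make the ISO 25964-to-SKOS mapping the article reviews concrete, a small sketch using Python's rdflib: the preferred term becomes skos:prefLabel, BT/RT relationships become skos:broader/skos:related. The URIs and labels are invented for illustration; real thesauri such as AGROVOC define their own.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus/")
    g = Graph()
    g.bind("skos", SKOS)

    g.add((EX.semanticWeb, RDF.type, SKOS.Concept))
    g.add((EX.semanticWeb, SKOS.prefLabel, Literal("Semantic Web", lang="en")))  # preferred term
    g.add((EX.semanticWeb, SKOS.altLabel, Literal("Web of Data", lang="en")))    # non-preferred term (UF)
    g.add((EX.semanticWeb, SKOS.broader, EX.worldWideWeb))                       # BT
    g.add((EX.semanticWeb, SKOS.related, EX.linkedData))                         # RT

    print(g.serialize(format="turtle"))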
    Source
IEEE Access. 7(2019) no.153, pp. 151-170
    Theme
Conception and application of the thesaurus principle
  3. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.00
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
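    The Web-based measures the study compares, Normalized Google Distance (NGD) and PMI, can both be computed from page counts. A sketch under the assumption that f_x, f_y and f_xy are hit counts for the terms alone and together, and n is the total number of indexed pages; the example numbers are made up.

    import math

    def ngd(f_x: int, f_y: int, f_xy: int, n: int) -> float:
        """Normalized Google Distance over raw page counts."""
        lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
        return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

    def pmi(f_x: int, f_y: int, f_xy: int, n: int) -> float:
        """Pointwise mutual information over the same counts."""
        return math.log((f_xy / n) / ((f_x / n) * (f_y / n)))

    print(ngd(10_000, 8_000, 3_000, 1_000_000))  # smaller = more related
    print(pmi(10_000, 8_000, 3_000, 1_000_000))  # larger = more related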
    Date
    26.12.2011 13:40:22
  4. Van de Sompel, H.: Thoughts about repositories, use, and re-use (2008) 0.00
    Content
    Vortrag "One more step towards the European digital library: International Conference, Deutsche Nationalbibliothek, Frankfurt am Main 31 January - 1 February 2008.
  5. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.00
    Content
Contains an overview of models, systems and projects.
    Pages
pp. 252-257
  6. Schreiber, G.: Principles and pragmatics of a Semantic Culture Web : tearing down walls and building bridges (2008) 0.00
    Content
    Vortrag "One more step towards the European digital library: International Conference, Deutsche Nationalbibliothek, Frankfurt am Main 31 January - 1 February 2008. Vgl. auch: http://www.cs.vu.nl/~guus/talks/08-frankfurt.pdf.
  7. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.00
    Abstract
As an extension of the current Web, the Semantic Web will contain not only structured data with machine-understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index Semantic Web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
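    A toy sketch of the hybrid-query idea: the structured constraint (e.g. a type) and the free-text keywords are both posting lists in one IR-style index and are intersected at query time. The index layout and data below are invented for illustration, not Semplore's actual structures.

    from collections import defaultdict

    index = defaultdict(set)  # "field:token" -> set of entity ids

    def add(entity: str, field: str, tokens: str) -> None:
        for t in tokens.lower().split():
            index[f"{field}:{t}"].add(entity)

    add("e1", "type", "Book");   add("e1", "text", "semantic web search")
    add("e2", "type", "Book");   add("e2", "text", "databases")
    add("e3", "type", "Person"); add("e3", "text", "semantic web")

    def hybrid(structured: str, keywords: list[str]) -> set[str]:
        """AND-combine one structured term with free-text keywords."""
        result = index[structured].copy()
        for kw in keywords:
            result &= index[f"text:{kw}"]
        return result

    print(hybrid("type:book", ["semantic", "web"]))  # -> {'e1'}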
    Pages
pp. 652-665
    Series
    Lecture notes in computer science; 4825
8. Carbonaro, A.; Santandrea, L.: A general Semantic Web approach for data analysis on graduates statistics 0.00
    Abstract
Currently, several datasets released in Linked Open Data format are available at national and international level, but the lack of shared strategies for defining the concepts used by the statistical publishing community makes it difficult to compare facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduate-surveys domain. The developed system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, which cover more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and its subsequent publication in the Linked Open Data context; iii) provides for its reuse in the open-data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the graduate-surveys domain by proposing a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
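    Since SW4AL exposes its data behind a SPARQL endpoint, a generic client query might look like the following sketch. The endpoint URL and the ex: vocabulary are placeholders, not the system's real names; SPARQLWrapper is assumed.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/sw4al/sparql")  # hypothetical endpoint
    sparql.setQuery("""
        PREFIX ex: <http://example.org/sw4al/>
        SELECT ?degree (COUNT(?g) AS ?n)
        WHERE { ?g a ex:Graduate ; ex:degree ?degree . }
        GROUP BY ?degree
    """)
    sparql.setReturnFormat(JSON)

    # Print one row per degree with the number of matching graduates.
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["degree"]["value"], row["n"]["value"])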
    Pages
pp. 99-104
  9. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.00
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
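    The character N-gram indexing mentioned above can be sketched in a few lines; the choice n=4 is arbitrary, for illustration only.

    def char_ngrams(text: str, n: int = 4) -> list[str]:
        """Overlapping character n-grams, e.g. for indexing URIrefs."""
        text = text.lower()
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    print(char_ngrams("SemanticWeb"))  # ['sema', 'eman', 'mant', ...]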
    Pages
xx pp.
  10. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.00
    Abstract
    One vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine understandable markup. These annotations will provide metadata about the documents as well as machine interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and causes improvements in either retrieval or inference to lead directly to improvements in the other.
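    One minimal reading of binding retrieval to inference is to expand a query term with everything a reasoner can derive from it. A toy sketch with a hand-rolled subclass closure; the taxonomy is invented and this is not the authors' system.

    SUBCLASS = {"dog": "mammal", "mammal": "animal"}  # child -> parent

    def expand(term: str) -> set[str]:
        """Return the term plus all superclasses derivable by transitivity."""
        terms = {term}
        while term in SUBCLASS:
            term = SUBCLASS[term]
            terms.add(term)
        return terms

    print(expand("dog"))  # {'dog', 'mammal', 'animal'} -> feed all into retrieval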
    Date
    12. 2.2011 17:35:22
  11. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    Abstract
After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines to search the web, like Google, focus on search in web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, it occurs that users would rather browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The consequently applied semantics allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies: concept lists composed by experts, published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something': users seek more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research.
Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will, which leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data and allows narrowing down to the desired level of detail and then broadening again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow and to which degree their features seemed useful for the exploration of linked data.
There is a difference between the way users interact with resources, visually or textually, and how resources are represented for machines to be processed by algorithms. This difference complicates bridging users' intents and machine-executable queries, and it is important to implement this 'translation' mechanism so that it affects the search as favorably as possible in terms of performance, complexity and accuracy. To do this, we explain a second technique that supports such a bridging component. It is developed around three features that support the search process: looking up, relating and ranking resources. The main goal is to ensure that the resources in the results are as precise and relevant as possible. During the evaluation of this technique, we did not only look at the precision of the search results but also investigated how the effectiveness of the search evolved while the user executed certain actions sequentially.
When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately define which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, where the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference for heuristically optimized minimal-cost paths. The effectiveness of paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated in different ways. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together. In fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, next to some important alternatives.
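    As a reference point for the third technique, a generic A* implementation over a weighted graph. The toy graph, costs and the zero heuristic are stand-ins for the thesis's serendipity-tuned weights, not its actual model.

    import heapq

    def a_star(graph, start, goal, h):
        """graph: node -> [(neighbor, cost)]; h: admissible heuristic h(node)."""
        frontier = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
        best = {}                                     # node -> cheapest g seen
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            if best.get(node, float("inf")) <= g:
                continue
            best[node] = g
            for nxt, cost in graph.get(node, []):
                heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
        return None, float("inf")

    graph = {"paper": [("author", 1.0), ("venue", 2.0)],
             "author": [("conference", 1.5)],
             "venue": [("conference", 0.5)]}
    print(a_star(graph, "paper", "conference", lambda n: 0.0))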
    Content
Doctoral dissertation submitted for the degree of Doctor of Engineering: Computer Science. See: https://www.researchgate.net/publication/319667837_Exploring_semantic_relationships_in_the_web_of_data.
    Pages
XXV, 184 pp.
  12. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.00
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
  13. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.00
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
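    Independent of the network architecture, such a system needs a routine first step: mapping RDF triples to integer ids so they can be fed to an embedding layer. A sketch of that step only; the triples and vocabulary are invented, and this is not the paper's actual encoding.

    triples = [("ex:Cat", "rdfs:subClassOf", "ex:Animal"),
               ("ex:tom", "rdf:type", "ex:Cat")]

    vocab: dict[str, int] = {}

    def idx(token: str) -> int:
        """Assign each URI/term a stable integer id."""
        return vocab.setdefault(token, len(vocab))

    encoded = [(idx(s), idx(p), idx(o)) for s, p, o in triples]
    print(encoded)  # e.g. [(0, 1, 2), (3, 4, 0)]
    print(vocab)    # the id table, shared across training and inference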
    Date
    16.11.2018 14:22:01
  14. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.00
    Date
    22. 9.2007 15:41:14
15. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.00
    Abstract
In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art technologies in the Semantic Web, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
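    The semantic-indexing idea, normalizing surface terms to ontology concepts before they reach the index, can be sketched as a lookup table. The table entries below are invented for the soccer domain and are not the thesis's actual ontology.

    TERM_TO_CONCEPT = {"goal": "soccer:Goal", "score": "soccer:Goal",
                       "keeper": "soccer:Goalkeeper", "goalie": "soccer:Goalkeeper"}

    def semantic_tokens(text: str) -> list[str]:
        """Map synonyms onto shared concept ids; leave unknown words as-is."""
        return [TERM_TO_CONCEPT.get(w, w) for w in text.lower().split()]

    print(semantic_tokens("The goalie made a save after the goal"))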
    Content
Thesis submitted to the Graduate School of Natural and Applied Sciences of Middle East Technical University in partial fulfilment of the requirements for the degree of Master of Science in Computer Engineering (XII, 57 pp.)
    Source
Information Systems. 37(2012) no.4, pp. 294-305
  16. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.00
    Date
    22. 9.2007 15:41:14
  17. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.00
    Date
    22. 9.2007 15:41:14
  18. Maltese, V.; Farazi, F.: Towards the integration of knowledge organization systems with the linked data cloud (2011) 0.00
    Abstract
Because it must represent the shared view of all the people involved, building a Knowledge Organization System (KOS) from scratch is extremely costly, and it is therefore fundamental to reuse existing resources. This can be done by progressively extending the KOS with knowledge coming from similar KOS and by promoting interoperability among them. The linked data initiative is indeed encouraging people to share and integrate their datasets into a giant network of interconnected resources, which enables different applications to interoperate and share their data. However, the integration should take into account the purpose of the datasets and make the semantics explicit; in fact, a difference in purpose is reflected in a difference in semantics. With this paper we (a) highlight the potential problems that may arise by not taking purpose and semantics into account, (b) make clear how a difference in purpose is reflected in totally different semantics and (c) provide an algorithm to translate from one semantics into another as a preliminary step towards the integration of ontologies designed for different purposes. This will allow reusing the ontologies even in contexts different from those in which they were designed.
    Content
Also in: Proceedings of the UDC Seminar 2011.
    Pages
15 pp.
  19. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.00
    Abstract
Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web content. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way" [1]. This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization" [2]. In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we do not decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
    Source
IEEE Intelligent Systems 2002, Jan./Feb., pp. 54-60
  20. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.00
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical data bases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
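    A structural sketch of the kind of conversion the method performs: a flat source record with broader-term references becomes rdfs:subClassOf triples. The record format, ids and URIs are invented for illustration, not the paper's actual mapping rules for Wordnet, AAT or MeSH; rdflib is assumed.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/vocab/")
    record = {"id": "C0001", "label": "Brain", "broader": ["C0002"]}  # toy source record

    g = Graph()
    cls = EX[record["id"]]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(record["label"], lang="en")))
    for parent in record["broader"]:
        g.add((cls, RDFS.subClassOf, EX[parent]))  # broader term -> superclass

    print(g.serialize(format="turtle"))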
    Source
Proceedings of the First European Semantic Web Symposium (ESWS2004). Eds.: C. Bussler, J. Davies, D. Fensel and R. Studer. 2004, pp. 299-311

Types

  • a 23
  • n 8
  • x 2
  • r 1