Search (23 results, page 1 of 2)

  • × theme_ss:"Semantic Web"
  • × year_i:[2010 TO 2020}
  1. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.08
    0.07979104 = product of:
      0.2393731 = sum of:
        0.2393731 = sum of:
          0.19898356 = weight(_text_:indexing in 3197) [ClassicSimilarity], result of:
            0.19898356 = score(doc=3197,freq=34.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              1.0462552 = fieldWeight in 3197, product of:
                5.8309517 = tf(freq=34.0), with freq of:
                  34.0 = termFreq=34.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
          0.04038954 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
            0.04038954 = score(doc=3197,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.23214069 = fieldWeight in 3197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
      0.33333334 = coord(1/3)
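    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and coord(1/3) scales the sum because only one of three top-level query clauses matched. As a sanity check, here is a minimal Python sketch that recomputes result 1 from the numbers in the tree, assuming the standard ClassicSimilarity formulas:

      import math

      # Standard Lucene ClassicSimilarity building blocks:
      #   tf(freq)              = sqrt(freq)
      #   idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1))
      #   queryWeight           = idf * queryNorm
      #   fieldWeight           = tf * idf * fieldNorm
      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.049684696, 44218, 0.046875

      s_indexing = term_score(34.0, 2614, MAX_DOCS, QUERY_NORM, FIELD_NORM)
      s_22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)

      # coord(1/3): one of three top-level query clauses matched.
      total = (s_indexing + s_22) / 3.0

      print(f"{s_indexing:.8f}")  # ~0.19898356
      print(f"{s_22:.8f}")        # ~0.04038954
      print(f"{total:.8f}")       # ~0.07979104

    The last-digit differences against the tree come from Lucene computing in single precision; the same arithmetic underlies every score breakdown in this result list.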
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also embody valuable expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information science a broad and comprehensible introduction to indexing. The title consists of twelve chapters: an introduction to subject headings and thesauri; automatic indexing versus manual indexing; techniques applied in automatic indexing of text material; automatic indexing of images; the black art of indexing moving images; automatic indexing of music; taxonomies and ontologies; metadata formats and indexing; tagging; topic maps; indexing the web; and the Semantic Web.
    Date
    24. 8.2016 14:03:22
    LCSH
    Indexing
    Subject
    Indexing
  2. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.01
    0.013931636 = product of:
      0.041794907 = sum of:
        0.041794907 = product of:
          0.083589815 = sum of:
            0.083589815 = weight(_text_:indexing in 3829) [ClassicSimilarity], result of:
              0.083589815 = score(doc=3829,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4395151 = fieldWeight in 3829, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
  3. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.01
    0.011609698 = product of:
      0.03482909 = sum of:
        0.03482909 = product of:
          0.06965818 = sum of:
            0.06965818 = weight(_text_:indexing in 97) [ClassicSimilarity], result of:
              0.06965818 = score(doc=97,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3662626 = fieldWeight in 97, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=97)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Robertson and Spärck Jones pioneered experimental probabilistic models (the Binary Independence Model), combining a typology that generalizes the Boolean model, frequency counts for calculating elementary weightings, and a way of combining these into a global probabilistic estimation. However, this model did not consider dependencies between indexing terms. An extension to mixture models (e.g., using a 2-Poisson law) made it possible to take these dependencies into account from a macroscopic point of view (BM25), along with shallow linguistic processing of co-references. Newer approaches (language models, for example "bag of words" models, with probabilistic dependencies between queries and documents and, consequently, Bayesian inference using a conjugate Dirichlet prior) furnished new solutions for document structuring (categorization) and for index smoothing. At present, the main issues in these probabilistic models have been addressed from a formal point of view only; linguistic properties are thus neglected in the indexing language. The authors examine how linguistic and semantic modeling can be integrated into indexing languages, and set up a hybrid model that makes it possible to deal with different information retrieval problems in a unified way.
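    For reference, the BM25 weighting mentioned above is usually written as follows; this is the standard formulation, not a parameterization taken from the article:

      \[
      \mathrm{score}(D,Q) = \sum_{t \in Q} \mathrm{idf}(t)\,
        \frac{f(t,D)\,(k_1+1)}{f(t,D) + k_1\left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)}
      \]

    where f(t,D) is the frequency of term t in document D, |D| is the document length, avgdl is the average document length in the collection, and k_1 and b are free parameters (typically k_1 between 1.2 and 2.0, and b around 0.75).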
  4. Luo, Y.; Picalausa, F.; Fletcher, G.H.L.; Hidders, J.; Vansummeren, S.: Storing and indexing massive RDF datasets (2012) 0.01
    0.011609698 = product of:
      0.03482909 = sum of:
        0.03482909 = product of:
          0.06965818 = sum of:
            0.06965818 = weight(_text_:indexing in 414) [ClassicSimilarity], result of:
              0.06965818 = score(doc=414,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3662626 = fieldWeight in 414, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=414)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The resource description framework (RDF for short) provides a flexible method for modeling information on the Web [34,40]. All data items in RDF are uniformly represented as triples of the form (subject, predicate, object), sometimes also referred to as (subject, property, value) triples. As a running example for this chapter, a small fragment of an RDF dataset concerning music and music fans is given in Fig. 2.1. Spurred by efforts like the Linking Open Data project, increasingly large volumes of data are being published in RDF. Notable contributors in this respect include areas as diverse as government, the life sciences, Web 2.0 communities, and so on. To give an idea of the volumes of RDF data concerned: as of September 2012, there are 31,634,213,770 triples in total published by data sources participating in the Linking Open Data project. Many individual data sources (e.g., PubMed, DBpedia, MusicBrainz) contain hundreds of millions of triples (797, 672, and 179 million, respectively). These large volumes of RDF data motivate the need for scalable native RDF data management solutions capable of efficiently storing, indexing, and querying RDF data. In this chapter, we present a general and up-to-date survey of the current state of the art in RDF storage and indexing.
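    As a small illustration of the triple model described above, the following Python sketch builds a toy music/fan graph with rdflib; the URIs and data are hypothetical stand-ins, not the actual Fig. 2.1 fragment:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import FOAF, RDF

      # Hypothetical namespace for the music example.
      EX = Namespace("http://example.org/music/")

      g = Graph()
      g.bind("ex", EX)
      g.bind("foaf", FOAF)

      # Every RDF statement is a (subject, predicate, object) triple.
      g.add((EX.alice, RDF.type, FOAF.Person))
      g.add((EX.alice, EX.isFanOf, EX.theBeatles))
      g.add((EX.theBeatles, FOAF.name, Literal("The Beatles")))

      # Turtle serialization shows the triples in a compact syntax.
      print(g.serialize(format="turtle"))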
  5. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.011219318 = product of:
      0.033657953 = sum of:
        0.033657953 = product of:
          0.06731591 = sum of:
            0.06731591 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.06731591 = score(doc=2090,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  6. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web [SKOS: a language for transferring thesauri to the Semantic Web] (2011) 0.01
    0.008975455 = product of:
      0.026926363 = sum of:
        0.026926363 = product of:
          0.053852726 = sum of:
            0.053852726 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.053852726 = score(doc=4331,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    15. 3.2011 19:21:22
  7. Vatant, B.: Porting library vocabularies to the Semantic Web, and back : a win-win round trip (2010) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 3968) [ClassicSimilarity], result of:
              0.048260607 = score(doc=3968,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 3968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3968)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Talk given in session 93, Cataloguing, at the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
  8. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 4816) [ClassicSimilarity], result of:
              0.048260607 = score(doc=4816,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 4816, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4816)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary consisting of a thesaurus, a gazetteer and a chronicle. Over the years, classification has increasingly been relegated to the background in favour of reference-vocabulary indexing and full-text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms themselves being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we are linking bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, the classification may be revisited as a powerful organization system by inference. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
  9. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    0.007853523 = product of:
      0.023560567 = sum of:
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.047121134 = score(doc=3283,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Semantic search over the Web (2012) 0.01
    0.0075834226 = product of:
      0.022750268 = sum of:
        0.022750268 = product of:
          0.045500536 = sum of:
            0.045500536 = weight(_text_:indexing in 411) [ClassicSimilarity], result of:
              0.045500536 = score(doc=411,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23924173 = fieldWeight in 411, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=411)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Web has become the world's largest database, with search being the main tool that allows organizations and individuals to exploit its huge amount of information. Search on the Web has traditionally been based on textual and structural similarities, ignoring to a large degree the semantic dimension, i.e., understanding the meaning of the query and of the document content. Combining search and semantics gives birth to the idea of semantic search. Traditional search engines have already advertised some semantic dimensions. Some of them, for instance, can enhance their generated result sets with documents that are semantically related to the query terms even though they may not include these terms. Nevertheless, the exploitation of semantic search has not yet reached its full potential. In this book, Roberto De Virgilio, Francesco Guerra and Yannis Velegrakis present an extensive overview of the work done in semantic search and other related areas. They explore different technologies and solutions in depth, making their collection valuable and stimulating reading for both academic and industrial researchers. The book is divided into three parts. The first introduces the reader to the basic notions of the Web of Data. It describes the different kinds of data that exist, their topology, and techniques for storing and indexing them. The second part is dedicated to Web search. It presents different types of search, such as exploratory or path-oriented search, alongside methods for their efficient and effective implementation. Other related topics in this part are the use of uncertainty in query answering, the exploitation of ontologies, and the use of semantics in mashup design and operation. The focus of the third part is on linked data: more specifically, on applying ideas originating in recommender systems to linked data management, and on techniques for efficient query answering over linked data.
    Content
    Contents: Introduction.- Part I Introduction to Web of Data.- Topology of the Web of Data.- Storing and Indexing Massive RDF Data Sets.- Designing Exploratory Search Applications upon Web Data Sources.- Part II Search over the Web.- Path-oriented Keyword Search Query over RDF.- Interactive Query Construction for Keyword Search on the Semantic Web.- Understanding the Semantics of Keyword Queries on Relational Data Without Accessing the Instance.- Keyword-Based Search over Semantic Data.- Semantic Link Discovery over Relational Data.- Embracing Uncertainty in Entity Linking.- The Return of the Entity-Relationship Model: Ontological Query Answering.- Linked Data Services and Semantics-enabled Mashup.- Part III Linked Data Search Engines.- A Recommender System for Linked Data.- Flint: from Web Pages to Probabilistic Semantic Data.- Searching and Browsing Linked Data with SWSE.
  11. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.04038954 = score(doc=4649,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    26.12.2011 13:40:22
  12. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
              0.04038954 = score(doc=662,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:29:20
  13. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
              0.04038954 = score(doc=2024,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 2024, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2024)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  14. Firnkes, M.: Schöne neue Welt : der Content der Zukunft wird von Algorithmen bestimmt [Brave new world: the content of the future is determined by algorithms] (2015) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 2118) [ClassicSimilarity], result of:
              0.04038954 = score(doc=2118,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 2118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2118)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    5. 7.2015 22:02:31
  15. Waltinger, U.; Mehler, A.; Lösch, M.; Horstmann, W.: Hierarchical classification of OAI metadata using the DDC taxonomy (2011) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 4841) [ClassicSimilarity], result of:
              0.04021717 = score(doc=4841,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 4841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4841)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In the area of digital library services, access to subject-specific metadata of scholarly publications is of utmost interest. One of the most prevalent approaches for metadata exchange is the XML-based Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). However, because of its loose requirements regarding metadata content, it specifies no strict standard for the consistent subject indexing that the digital library domain needs. This contribution addresses the problem of automatically enhancing OAI metadata by means of the most widely used universal classification scheme in libraries, the Dewey Decimal Classification (DDC). More specifically, we automatically classify scientific documents according to the DDC taxonomy down to three levels, using a machine-learning-based classifier that relies solely on OAI metadata records as the document representation. The results show an asymmetric distribution of documents across the hierarchical structure of the DDC taxonomy and issues of data sparseness. Nevertheless, the performance of the classifier shows promising results on all three levels of the DDC.
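    As a rough sketch of the kind of metadata-only classification described above (one flat text classifier per DDC level), here is a minimal Python example; the pipeline, field choices and toy data are illustrative assumptions, not the authors' implementation:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy OAI metadata records (e.g., title + description concatenated)
      # with their first-level DDC class; purely illustrative data.
      records = [
          "Introduction to number theory and algebra",
          "A survey of European medieval history",
          "Machine learning methods for digital libraries",
      ]
      ddc_level1 = ["5", "9", "0"]  # 500 science, 900 history, 000 generalities

      # Deeper DDC levels would be trained the same way on labels such as
      # "51" or "510", optionally conditioned on the predicted parent class.
      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
      clf.fit(records, ddc_level1)

      print(clf.predict(["Algebraic structures and group theory"]))  # likely "5"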
  16. Bergamaschi, S.; Domnori, E.; Guerra, F.; Rota, S.; Lado, R.T.; Velegrakis, Y.: Understanding the semantics of keyword queries on relational data without accessing the instance (2012) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 431) [ClassicSimilarity], result of:
              0.04021717 = score(doc=431,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=431)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The birth of the Web has brought exponential growth in the amount of information freely available to the Internet population, overloading users and entangling their efforts to satisfy their information needs. Web search engines such as Google, Yahoo, or Bing have become popular mainly because they offer an easy-to-use query interface (i.e., based on keywords) and an effective and efficient query execution mechanism. The majority of these search engines do not consider information stored on the deep or hidden Web [9,28], despite the fact that the size of the deep Web is estimated to be much bigger than the surface Web [9,47]. A number of systems record interactions with deep Web sources or automatically submit queries to them (mainly through their Web form interfaces) in order to index their content. Unfortunately, this technique indexes the data instance only partially. Moreover, it cannot take advantage of the query capabilities of the data sources, for example their relational query features, because the interface is often restricted to the Web form. Besides, Web search engines focus on retrieving documents rather than querying structured sources, so they are unable to access information based on concepts.
  17. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 438) [ClassicSimilarity], result of:
              0.04021717 = score(doc=438,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper, we discuss the architecture and implementation of the Semantic Web Search Engine (SWSE). Following traditional search engine architecture, SWSE consists of crawling, data enhancing, indexing and a user interface for search, browsing and retrieval of information; unlike traditional search engines, SWSE operates over RDF Web data - loosely also known as Linked Data - which implies unique challenges for the system design, architecture, algorithms, implementation and user interface. In particular, many challenges exist in adopting Semantic Web technologies for Web data: the unique challenges of the Web - in terms of scale, unreliability, inconsistency and noise - are largely overlooked by the current Semantic Web standards. Herein, we describe the current SWSE system, initially detailing the architecture and later elaborating upon the function, design, implementation and performance of each individual component. In so doing, we also give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data. Throughout, we offer evaluation and complementary argumentation to support our design choices, and also offer discussion on future directions and open research questions. Later, we also provide candid discussion relating to the difficulties currently faced in bringing such a search engine into the mainstream, and lessons learnt from roughly six years working on the Semantic Web Search Engine project.
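    The staged architecture outlined in the abstract (crawl, enhance, index, search) can be pictured with a hypothetical Python skeleton; the function names and the naive inverted index are illustrative, not SWSE's actual interfaces:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Triple:
          subject: str
          predicate: str
          obj: str

      def crawl(seed_urls):
          # Stand-in for the crawler, which would fetch RDF documents from the Web.
          return [Triple("ex:alice", "foaf:name", '"Alice"')]

      def enhance(triples):
          # Stand-in for data enhancing (e.g., identity consolidation, reasoning).
          return triples

      def index(triples):
          # Naive inverted index from terms to the triples mentioning them.
          idx = {}
          for t in triples:
              for term in (t.subject, t.predicate, t.obj):
                  idx.setdefault(term, []).append(t)
          return idx

      def search(idx, term):
          return idx.get(term, [])

      idx = index(enhance(crawl(["http://example.org/"])))
      print(search(idx, "foaf:name"))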
  18. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 5997) [ClassicSimilarity], result of:
              0.04021717 = score(doc=5997,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 5997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and in SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is also taken into account; three thesauri are chosen for this purpose: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
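    A minimal rdflib sketch of how thesaurus relationships map onto SKOS (the concepts are invented for illustration, not taken from AGROVOC, EuroVoc or the UNESCO Thesaurus):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")

      g = Graph()
      g.bind("skos", SKOS)

      # ISO 25964 BT/NT pairs map to skos:broader / skos:narrower,
      # RT (related term) maps to skos:related.
      g.add((EX.indexing, RDF.type, SKOS.Concept))
      g.add((EX.indexing, SKOS.prefLabel, Literal("Indexing", lang="en")))
      g.add((EX.subjectIndexing, RDF.type, SKOS.Concept))
      g.add((EX.subjectIndexing, SKOS.broader, EX.indexing))
      g.add((EX.indexing, SKOS.narrower, EX.subjectIndexing))
      g.add((EX.indexing, SKOS.related, EX.classification))

      print(g.serialize(format="turtle"))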
  19. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.01
    0.0066354945 = product of:
      0.019906484 = sum of:
        0.019906484 = product of:
          0.039812967 = sum of:
            0.039812967 = weight(_text_:indexing in 4515) [ClassicSimilarity], result of:
              0.039812967 = score(doc=4515,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.20933652 = fieldWeight in 4515, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4515)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of past, present and future knowledge representation approaches, an introduction to ontologies and Web indexing, and, for the first time, novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and the Web 2.0 (folksonomies). Currently no other monograph provides a combined overview of these topics or focuses on using knowledge representation methods for document indexing purposes. To this end, considerations from classical librarian interests in knowledge representation (thesauri, classification schemes, etc.) are included, which are absent from most other books, whose background lies more strongly in computer science.
  20. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.005609659 = product of:
      0.016828977 = sum of:
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.033657953 = score(doc=4553,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    16.11.2018 14:22:01

Languages

  • e (English) 21
  • d (German) 2
