Search (35 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Semantic Web"
  • year_i:[2010 TO 2020}
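The limiting filters above correspond to Solr filter queries (`fq` parameters). Note the mixed brackets in `year_i:[2010 TO 2020}`: the square bracket makes the lower bound inclusive, the curly brace makes the upper bound exclusive, so the range keeps 2010 through 2019. A minimal sketch of how such a request could be assembled (the field names are taken from the filters above; the endpoint URL is a placeholder, not the actual installation):

```python
from urllib.parse import urlencode

# Filter queries as shown in the result header; [2010 TO 2020} keeps
# 2010-2019: '[' is an inclusive bound, '}' an exclusive one.
filters = [
    'language_ss:"e"',
    'theme_ss:"Semantic Web"',
    'year_i:[2010 TO 2020}',
]

params = [("q", "*:*")] + [("fq", f) for f in filters] + [("rows", "20")]
query_string = urlencode(params)

# The base URL is hypothetical; a real installation would differ.
url = "http://localhost:8983/solr/catalog/select?" + query_string
print(url)
```

Each `fq` restricts the result set without affecting relevance scoring, which is why the filtered terms do not appear in the score explanations below.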
  1. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.0054612528 = product of:
      0.054612525 = sum of:
        0.054612525 = product of:
          0.08191878 = sum of:
            0.056981456 = weight(_text_:2010 in 4649) [ClassicSimilarity], result of:
              0.056981456 = score(doc=4649,freq=3.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.38834336 = fieldWeight in 4649, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
            0.02493733 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.02493733 = score(doc=4649,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.6666667 = coord(2/3)
      0.1 = coord(1/10)
    
    Date
    26.12.2011 13:40:22
    Year
    2010
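The indented tree under each result is standard Lucene ClassicSimilarity (tf-idf) "explain" output: each `weight(...)` leaf is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the partial sums are scaled by the coordination factors. A sketch reproducing the arithmetic for result 1, with all constants copied from the tree above:

```python
import math

query_norm = 0.03067635  # queryNorm from the explain tree

def term_score(freq, idf, field_norm):
    """One weight(...) leaf: queryWeight * fieldWeight."""
    query_weight = idf * query_norm        # idf * queryNorm
    tf = math.sqrt(freq)                   # ClassicSimilarity tf = sqrt(freq)
    field_weight = tf * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

w_2010 = term_score(freq=3.0, idf=4.7831497, field_norm=0.046875)
w_22   = term_score(freq=2.0, idf=3.5018296, field_norm=0.046875)

# coord(2/3): two of three clauses matched; coord(1/10): one of ten.
score = (w_2010 + w_22) * (2 / 3) * (1 / 10)
print(score)  # agrees with the displayed 0.0054612528 up to float rounding
```

This only reconstructs the explain tree shown; the remaining query clauses contributed nothing to this document, which is what the coord(1/10) factor reflects.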
  2. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. (2010) 0.01
    0.0052496144 = product of:
      0.052496143 = sum of:
        0.052496143 = product of:
          0.15748842 = sum of:
            0.15748842 = weight(_text_:2010 in 4706) [ClassicSimilarity], result of:
              0.15748842 = score(doc=4706,freq=33.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                1.0733243 = fieldWeight in 4706, product of:
                  5.7445626 = tf(freq=33.0), with freq of:
                    33.0 = termFreq=33.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4706)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
    RSWK
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Subject
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Year
    2010
  3. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. (2010) 0.00
    0.004199691 = product of:
      0.04199691 = sum of:
        0.04199691 = product of:
          0.12599073 = sum of:
            0.12599073 = weight(_text_:2010 in 4707) [ClassicSimilarity], result of:
              0.12599073 = score(doc=4707,freq=33.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.85865945 = fieldWeight in 4707, product of:
                  5.7445626 = tf(freq=33.0), with freq of:
                    33.0 = termFreq=33.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4707)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
    RSWK
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Subject
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Year
    2010
  4. Coyle, K.: Understanding the Semantic Web : bibliographic data and metadata (2010) 0.00
    0.003798764 = product of:
      0.03798764 = sum of:
        0.03798764 = product of:
          0.11396291 = sum of:
            0.11396291 = weight(_text_:2010 in 4169) [ClassicSimilarity], result of:
              0.11396291 = score(doc=4169,freq=3.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.7766867 = fieldWeight in 4169, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4169)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Year
    2010
  5. Mirizzi, R.; Noia, T. Di: From exploratory search to Web Search and back (2010) 0.00
    0.0029013536 = product of:
      0.029013537 = sum of:
        0.029013537 = product of:
          0.08704061 = sum of:
            0.08704061 = weight(_text_:2010 in 4802) [ClassicSimilarity], result of:
              0.08704061 = score(doc=4802,freq=7.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.59320426 = fieldWeight in 4802, product of:
                  2.6457512 = tf(freq=7.0), with freq of:
                    7.0 = termFreq=7.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4802)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    Contribution to: PIKM'10, October 30, 2010, Toronto, Ontario, Canada. - See also: http://www.inf.unibz.it/krdb/events/swap2010/paper-08.pdf.
    Source
    http://sisinflab.poliba.it/publications/2010/MD10/pikm21-mirizzi.pdf
    Year
    2010
  6. Fripp, D.: Using linked data to classify web documents (2010) 0.00
    0.0028607734 = product of:
      0.028607734 = sum of:
        0.028607734 = product of:
          0.0858232 = sum of:
            0.0858232 = weight(_text_:2010 in 4172) [ClassicSimilarity], result of:
              0.0858232 = score(doc=4172,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5849073 = fieldWeight in 4172, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4172)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Aslib proceedings. 62(2010) no.6, S.585-595
    Year
    2010
  7. Vatant, B.: Porting library vocabularies to the Semantic Web, and back : a win-win round trip (2010) 0.00
    0.0024520915 = product of:
      0.024520915 = sum of:
        0.024520915 = product of:
          0.07356274 = sum of:
            0.07356274 = weight(_text_:2010 in 3968) [ClassicSimilarity], result of:
              0.07356274 = score(doc=3968,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5013491 = fieldWeight in 3968, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3968)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    Lecture given in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
    Year
    2010
  8. Binding, C.; Tudhope, D.: Terminology Web services (2010) 0.00
    0.0024520915 = product of:
      0.024520915 = sum of:
        0.024520915 = product of:
          0.07356274 = sum of:
            0.07356274 = weight(_text_:2010 in 4067) [ClassicSimilarity], result of:
              0.07356274 = score(doc=4067,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5013491 = fieldWeight in 4067, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4067)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Knowledge organization. 37(2010) no.4, S.287-298
    Year
    2010
  9. Cahier, J.-P.; Ma, X.; Zaher, L'H.: Document and item-based modeling : a hybrid method for a socio-semantic web (2010) 0.00
    0.0022159456 = product of:
      0.022159455 = sum of:
        0.022159455 = product of:
          0.066478364 = sum of:
            0.066478364 = weight(_text_:2010 in 62) [ClassicSimilarity], result of:
              0.066478364 = score(doc=62,freq=3.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.45306724 = fieldWeight in 62, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=62)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Year
    2010
  10. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.00
    0.0020434097 = product of:
      0.020434096 = sum of:
        0.020434096 = product of:
          0.06130229 = sum of:
            0.06130229 = weight(_text_:2010 in 112) [ClassicSimilarity], result of:
              0.06130229 = score(doc=112,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.41779095 = fieldWeight in 112, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=112)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Semantic Web journal. 0(2010), no.1
    Year
    2010
  11. Eiter, T.; Kaminski, T.; Redl, C.; Schüller, P.; Weinzierl, A.: Answer set programming with external source access (2017) 0.00
    0.0017626584 = product of:
      0.017626584 = sum of:
        0.017626584 = product of:
          0.052879747 = sum of:
            0.052879747 = weight(_text_:problem in 3938) [ClassicSimilarity], result of:
              0.052879747 = score(doc=3938,freq=6.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.4061259 = fieldWeight in 3938, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3938)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Access to external information is an important need for Answer Set Programming (ASP), a rapidly growing declarative problem-solving approach. External access includes not only data in different formats but, more generally, also the results of computations, possibly in a two-way information exchange. Providing such access is a major challenge, particularly if it is to be supported at a generic level, both regarding the semantics and efficient computation. In this article, we consider problem solving with ASP under external information access using the dlvhex system. The latter facilitates this access through special external atoms, which are two-way API-style interfaces between the rules of the program and an external source. The dlvhex system has a flexible plugin architecture that allows one to use multiple predefined and user-defined external atoms which can be implemented, e.g., in Python or C++. We consider how to solve problems using the ASP paradigm, and specifically discuss how to use external atoms in this context, illustrated by examples. As a showcase, we demonstrate the development of a hex program for a concrete real-world problem using Semantic Web technologies, and discuss specifics of the implementation process.
  12. Cali, A.: Ontology querying : datalog strikes back (2017) 0.00
    0.0017270452 = product of:
      0.017270451 = sum of:
        0.017270451 = product of:
          0.051811352 = sum of:
            0.051811352 = weight(_text_:problem in 3928) [ClassicSimilarity], result of:
              0.051811352 = score(doc=3928,freq=4.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.39792046 = fieldWeight in 3928, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3928)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    In this tutorial we address the problem of ontology querying, that is, the problem of answering queries against a theory constituted by facts (the data) and inference rules (the ontology). A varied landscape of ontology languages exists in the scientific literature, with several degrees of complexity of query processing. We argue that Datalog±, a family of languages derived from Datalog, is a powerful tool for ontology querying. To illustrate the impact of this comeback of Datalog, we present the basic paradigms behind the main Datalog± languages as well as some recent extensions. We also present some efficient query processing techniques for certain cases.
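The setting the tutorial describes, answering queries against facts plus inference rules, can be illustrated with a naive bottom-up Datalog evaluation. The predicates and data below are invented for illustration and are not from the tutorial itself:

```python
# Naive (fixpoint) evaluation of two Datalog rules over a small fact base:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
facts = {("parent", "anna", "ben"), ("parent", "ben", "carla")}

def evaluate(facts):
    derived = set(facts)
    while True:
        new = set()
        for (p, x, y) in derived:
            if p == "parent":
                new.add(("ancestor", x, y))                      # rule 1
            for (q, y2, z) in derived:
                if p == "parent" and q == "ancestor" and y == y2:
                    new.add(("ancestor", x, z))                  # rule 2
        if new <= derived:                 # fixpoint reached
            return derived
        derived |= new

closure = evaluate(facts)
# Query: ancestor(anna, Z)?  Answered against the deductive closure.
ancestors_of_anna = {z for (p, x, z) in closure
                     if p == "ancestor" and x == "anna"}
print(sorted(ancestors_of_anna))
```

Naive evaluation recomputes everything each round; the "efficient query processing techniques" the tutorial mentions (e.g. rewriting the query rather than materializing the closure) avoid exactly this cost.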
  13. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.00
    0.0015508389 = product of:
      0.015508389 = sum of:
        0.015508389 = product of:
          0.046525165 = sum of:
            0.046525165 = weight(_text_:2010 in 4816) [ClassicSimilarity], result of:
              0.046525165 = score(doc=4816,freq=2.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.31708103 = fieldWeight in 4816, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4816)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary which consists of a thesaurus, a gazetteer and a chronicle. Over the years, classification has increasingly been relegated to the background compared with reference vocabulary indexing and full-text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we are linking bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, the classification may be revisited as a powerful organization system by inference. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
  14. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.00
    0.0014392043 = product of:
      0.0143920425 = sum of:
        0.0143920425 = product of:
          0.043176126 = sum of:
            0.043176126 = weight(_text_:problem in 3936) [ClassicSimilarity], result of:
              0.043176126 = score(doc=3936,freq=4.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.33160037 = fieldWeight in 3936, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3936)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links, and properties of those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists of equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows modeling the problem of data linkage in Linked Data as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
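The kind of rule this survey mentions, OWL constraints such as property transitivity or symmetry expressed as Datalog over RDF triples, can be sketched as a saturation loop over a triple set. The choice of skos:broader as transitive and the property ex:connectedTo as symmetric is an illustrative assumption, not taken from the survey:

```python
# Datalog-style inference over RDF triples, as in a deductive triplestore:
#   (s, P, o), (o, P, o2) -> (s, P, o2)   for a transitive property P
#   (s, P, o)             -> (o, P, s)    for a symmetric property P
TRANSITIVE = {"skos:broader"}
SYMMETRIC = {"ex:connectedTo"}   # hypothetical property for illustration

def saturate(triples):
    triples = set(triples)
    while True:
        new = set()
        for (s, p, o) in triples:
            if p in SYMMETRIC:
                new.add((o, p, s))
            if p in TRANSITIVE:
                for (s2, p2, o2) in triples:
                    if p2 == p and s2 == o:
                        new.add((s, p, o2))
        if new <= triples:               # nothing new: fixpoint
            return triples
        triples |= new

data = {
    ("ex:Cats", "skos:broader", "ex:Mammals"),
    ("ex:Mammals", "skos:broader", "ex:Animals"),
    ("ex:Berlin", "ex:connectedTo", "ex:Hamburg"),
}
inferred = saturate(data)
print(len(inferred))
```

A production triplestore would evaluate such rules incrementally rather than re-scanning all triples, but the derived triples are the same.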
  15. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.00
    0.0014303867 = product of:
      0.014303867 = sum of:
        0.014303867 = product of:
          0.0429116 = sum of:
            0.0429116 = weight(_text_:2010 in 4515) [ClassicSimilarity], result of:
              0.0429116 = score(doc=4515,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.29245365 = fieldWeight in 4515, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4515)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    Also a doctoral dissertation, University of Düsseldorf, 2010
    Year
    2010
  16. Rajabi, E.; Sanchez-Alonso, S.; Sicilia, M.-A.: Analyzing broken links on the web of data : An experiment with DBpedia (2014) 0.00
    0.0014247395 = product of:
      0.014247394 = sum of:
        0.014247394 = product of:
          0.04274218 = sum of:
            0.04274218 = weight(_text_:problem in 1330) [ClassicSimilarity], result of:
              0.04274218 = score(doc=1330,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.3282676 = fieldWeight in 1330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1330)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Linked open data allow interlinking and integrating any kind of data on the web. Links between various data sources play a key role insofar as they allow software applications (e.g., browsers, search engines) to operate over the aggregated data space as if it were a single local database. In this new data space, where DBpedia, a data set including structured information from Wikipedia, seems to be the central hub, we analyzed and highlighted outgoing links from this hub in an effort to discover broken links. The paper reports on an experiment to examine the causes of broken links and proposes some treatments for solving this problem.
  17. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.00
    0.0013854074 = product of:
      0.013854073 = sum of:
        0.013854073 = product of:
          0.04156222 = sum of:
            0.04156222 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.04156222 = score(doc=2090,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  18. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.00
    0.0012923657 = product of:
      0.012923657 = sum of:
        0.012923657 = product of:
          0.03877097 = sum of:
            0.03877097 = weight(_text_:2010 in 99) [ClassicSimilarity], result of:
              0.03877097 = score(doc=99,freq=2.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.2642342 = fieldWeight in 99, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching for and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built up from two engines: The first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation, related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously annotated texts to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are expressed as supplementary information.
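The second engine's idea, indexing text not just by term but by discourse category such as "definition" or "causality", can be sketched with a toy annotator feeding an inverted index. The pattern rules below are crude stand-ins for EXCOM's Contextual Exploration method, which the paper does not specify at this level of detail:

```python
import re
from collections import defaultdict

# Crude surface cues standing in for real Contextual Exploration rules.
CUES = {
    "definition": re.compile(r"\bis (a|an|the)\b|\bis defined as\b"),
    "causality": re.compile(r"\bbecause\b|\bleads to\b|\bcauses\b"),
}

sentences = [
    "An ontology is a formal specification of a conceptualization.",
    "Ambiguity causes retrieval errors.",
    "The corpus was collected in 2010.",
]

# Semantic inverted index: category -> set of sentence ids.
index = defaultdict(set)
for i, sentence in enumerate(sentences):
    for category, pattern in CUES.items():
        if pattern.search(sentence):
            index[category].add(i)

# Query by semantic category instead of by keyword.
print(sorted(index["definition"]), sorted(index["causality"]))
```

Retrieval then proceeds as with an ordinary inverted index, except that the posting lists are keyed by discourse category rather than by term.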
  19. Mirizzi, R.: Exploratory browsing in the Web of Data (2011) 0.00
    0.0012793768 = product of:
      0.012793767 = sum of:
        0.012793767 = product of:
          0.0383813 = sum of:
            0.0383813 = weight(_text_:2010 in 4803) [ClassicSimilarity], result of:
              0.0383813 = score(doc=4803,freq=4.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.2615785 = fieldWeight in 4803, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4803)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    http://sisinflab.poliba.it/publications/2010/Mir10/gii-2010.pdf
  20. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.00
    0.0010338926 = product of:
      0.010338926 = sum of:
        0.010338926 = product of:
          0.031016776 = sum of:
            0.031016776 = weight(_text_:2010 in 4796) [ClassicSimilarity], result of:
              0.031016776 = score(doc=4796,freq=2.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.21138735 = fieldWeight in 4796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4796)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
