Search (115 results, page 2 of 6)

  • language_ss:"e"
  • theme_ss:"Semantic Web"
  • year_i:[2010 TO 2020}
  1. Corcho, O.; Poveda-Villalón, M.; Gómez-Pérez, A.: Ontology engineering in the era of linked data (2015) 0.01
    0.0068817483 = product of:
      0.01720437 = sum of:
        0.011678694 = weight(_text_:a in 3293) [ClassicSimilarity], result of:
          0.011678694 = score(doc=3293,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21843673 = fieldWeight in 3293, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3293)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 3293) [ClassicSimilarity], result of:
              0.011051352 = score(doc=3293,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 3293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3293)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
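The breakdown above is Solr/Lucene ClassicSimilarity "explain" output. A minimal sketch of how one term's contribution is computed from those numbers (the function name is ours; the formulas are ClassicSimilarity's tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1))):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF):
    score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm
    and fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                            # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1)) # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Reproduce the weight(_text_:a in 3293) line above:
score = classic_term_score(12.0, 37942, 44218, 0.046368346, 0.0546875)
print(score)  # agrees with the reported 0.011678694 to float precision
```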
    
    Abstract
    Ontology engineering encompasses the methods, tools, and techniques used to develop ontologies. Without requiring ontologies, linked data is driving a paradigm shift, bringing benefits and drawbacks to the publishing world. Ontologies may be heavyweight, supporting deep understanding of a domain, or lightweight, suited to simple classification of concepts and more adaptable for linked data. They also vary in domain specificity, usability and reusability. Hybrid vocabularies drawing elements from diverse sources often suffer from internally incompatible semantics. To serve linked data purposes, ontology engineering teams require a range of skills in philosophy, computer science, web development, librarianship and domain expertise.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.13-17
    Type
    a
  2. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.01
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching for and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built from two engines. The first, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation based on discourse and semantic maps, which are specifications of general linguistic ontologies founded on Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously produced semantic annotated texts to create a semantic inverted index, which can retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are given as supplementary information.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
    Type
    a
  3. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.01
    
    Abstract
    Modern information retrieval systems advance user experience on the basis of concept-based rather than keyword-based query answering.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Type
    a
  4. Smith, D.A.: Exploratory and faceted browsing over heterogeneous and cross-domain data sources (2011) 0.01
    
    Abstract
    Exploration of heterogeneous data sources increases the value of information by allowing users to answer questions through exploration across multiple sources; users can draw on information that has been posted across the Web to answer questions and learn about new domains. We have conducted research that lowers the interrogation time of faceted data by combining related information from different sources. The work contributes methodologies for combining heterogeneous sources and for delivering that data to a user interface scalably, with enough performance to support rapid interrogation of the knowledge by the user. The work also shows how to combine linked data sources so that users can create faceted browsers that target the information facets of their needs. The work is grounded and proven in a number of experiments and test cases that study the contributions in domain research work.
    Footnote
    A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy. June 2011.
  5. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.01
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, with effectiveness relying on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) with the potentiality of the automated inference of "intelligent" systems (the semantic component).
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.701-714
    Type
    a
  6. Chaudhury, S.; Mallik, A.; Ghosh, H.: Multimedia ontology : representation and applications (2016) 0.01
    
    Abstract
    The book covers multimedia ontology in heritage preservation with intellectual explorations of various themes of Indian cultural heritage. The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relays real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building semantic, content-based search and retrieval engines and also with developing vertical application-specific search applications. It guides you in designing multimedia tools that aid in logical and conceptual organization of large amounts of multimedia data. As a practical demonstration, it showcases multimedia applications in cultural heritage preservation efforts and the creation of virtual museums. The book describes the limitations of existing ontology techniques in semantic multimedia data processing, as well as some open problems in the representations and applications of multimedia ontology. As an antidote, it introduces new ontology representation and reasoning schemes that overcome these limitations. The long, compiled efforts reflected in Multimedia Ontology: Representation and Applications are a signpost for new achievements and developments in efficiency and accessibility in the field.
    Footnote
    Rev. in: Annals of Library and Information Studies 62(2015) no.4, S.299-300 (A.K. Das)
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
  7. Virgilio, R. De; Cappellari, P.; Maccioni, A.; Torlone, R.: Path-oriented keyword search query over RDF (2012) 0.01
    
    Abstract
    We are witnessing a smooth evolution of the Web from a worldwide information space of linked documents to a global knowledge base, where resources are identified by means of uniform resource identifiers (URIs, essentially string identifiers) and are semantically described and correlated through resource description framework (RDF, a metadata data model) statements. With the size and availability of data constantly increasing (currently around 7 billion RDF triples and 150 million RDF links), a fundamental problem lies in the difficulty users face to find and retrieve the information they are interested in. In general, to access semantic data, users need to know the organization of the data and the syntax of a specific query language (e.g., SPARQL or variants thereof). Clearly, this represents an obstacle to information access for nonexpert users. For this reason, keyword search-based systems are increasingly capturing the attention of researchers. Recently, many approaches to keyword-based search over structured and semistructured data have been proposed. These approaches usually implement IR strategies on top of traditional database management systems with the goal of freeing users from having to know the data organization and query languages.
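The obstacle the abstract describes can be made concrete: structured access requires knowing the schema and a query syntax, while keyword search does not. A toy, stdlib-only sketch (the triples and function names are invented for illustration, not taken from the chapter):

```python
# Toy RDF-like triple store: keyword search vs. structure-aware lookup.
# All data and names here are invented for illustration.
triples = [
    ("ex:TimBL", "ex:invented", "ex:WorldWideWeb"),
    ("ex:TimBL", "rdfs:label", "Tim Berners-Lee"),
    ("ex:WorldWideWeb", "rdfs:label", "World Wide Web"),
]

def keyword_search(keyword):
    """Return every triple whose subject, predicate, or object mentions
    the keyword -- no schema or query-language knowledge required."""
    kw = keyword.lower()
    return [t for t in triples if any(kw in part.lower() for part in t)]

def objects_of(subject, predicate):
    """A structured lookup, by contrast, must name the exact predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(keyword_search("web"))                 # matches "web" anywhere in a triple
print(objects_of("ex:TimBL", "rdfs:label"))  # requires knowing the schema
```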
  8. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.01
    
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way."1 This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization."2 In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
    Type
    a
  9. LeBoeuf, P.: ¬A strange model named FRBRoo (2012) 0.01
    
    Abstract
    Libraries and museums developed rules for the description of their collections prior to formalizing the underlying conceptualization reflected in such rules. That formalizing process took place in the 1990s and resulted in two independent conceptual models: FRBR for bibliographic information (published in 1998), and CIDOC CRM for museum information (developed from 1996 on, and issued as ISO standard 21127 in 2006). An international working group was formed in 2003 with the purpose of harmonizing these two models. The resulting model, FRBRoo, was published in 2009. It is an extension to CIDOC CRM, written in the same formalism as CIDOC CRM itself. It adds to FRBR the dynamic aspects of CIDOC CRM, and a number of refinements (e.g. in the definitions of Work and Manifestation). Some modifications were made in CIDOC CRM as well. FRBRoo was developed with Semantic Web technologies in mind, and lends itself well to the Linked Data environment; but will it be used in that context?
    Content
    Contribution to a special issue "The FRBR family of conceptual models: toward a linked future"
    Type
    a
  10. Baker, T.; Sutton, S.A.: Linked data and the charm of weak semantics : Introduction: the strengths of weak semantics (2015) 0.01
    
    Abstract
    Logic and precision are fundamental to ontologies underlying the semantic web and, by extension, to linked data. This special section focuses on the interaction of semantics, ontologies and linked data. The discussion presents the Simple Knowledge Organization System (SKOS) as a less formal strategy for expressing concept hierarchies and associations, and questions the value of deep domain ontologies in favor of simpler vocabularies that are more open to reuse, albeit risking illogical outcomes. RDF ontologies harbor another unexpected drawback: while structurally sound, they leave validation gaps permitting illogical uses, a problem being addressed by a W3C Working Group. Data models based on RDF graphs and properties may replace traditional library catalog models geared to predefined entities, with relationships between RDF classes providing the semantic connections. The BIBFRAME Initiative takes a different and streamlined approach to linking data, building rich networks of information resources rather than relying on a strict underlying structure and vocabulary. Taken together, the articles illustrate the trend toward a pragmatic approach to a Semantic Web, sacrificing some specificity for greater flexibility and partial interoperability.
    Footnote
    Introduction to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.10-12
    Type
    a
  11. Blanco, L.; Bronzi, M.; Crescenzi, V.; Merialdo, P.; Papotti, P.: Flint: from Web pages to probabilistic semantic data (2012) 0.01
    
    Abstract
    The Web is a surprisingly extensive source of information: it offers a huge number of sites containing data about a disparate range of topics. Although Web pages are built for human consumption, not for automatic processing of the data, we observe that an increasing number of Web sites deliver pages containing structured information about recognizable concepts, relevant to specific application domains, such as movies, finance, sport, products, etc. The development of scalable techniques to discover, extract, and integrate data from fairly structured large corpora available on the Web is a challenging issue, because to face the Web scale, these activities should be accomplished automatically by domain-independent techniques. To cope with the complexity and the heterogeneity of Web data, state-of-the-art approaches focus on information organized according to specific patterns that frequently occur on the Web. Meaningful examples are WebTables, which focuses on data published in HTML tables, and information extraction systems, such as TextRunner, which exploit lexical-syntactic patterns. As noted by Cafarella et al., even if only a small fraction of the Web is organized according to these patterns, due to the Web scale the amount of data involved is impressive. In this chapter, we focus on methods and techniques to wring value out of the data delivered by large data-intensive Web sites.
  12. Almeida, M.; Souza, R.; Fonseca, F.: Semantics in the Semantic Web : a critical evaluation (2011) 0.01
    
    Abstract
    In recent years, the term "semantics" has been widely used in various fields of research, particularly in areas related to information technology. One motivator of this appropriation is the vision of the Semantic Web, a set of developments underway that might allow one to obtain better results when querying the web. However, it is worth asking what kind of semantics we can find in the Semantic Web, considering that studying the subject is a complex and controversial endeavor. Working within this context, we present an account of semantics, relying on the main linguistic approaches, in order to then analyze what semantics is within the scope of information technology. We critically evaluate a spectrum that proposes the ordering of instruments (models, languages, taxonomic structures, to mention but a few) according to a semantic scale. In addition to proposing a new extended spectrum, we suggest alternative interpretations with the aim of clarifying the use of the term "semantics" in different contexts. Finally, we offer our conclusions regarding the semantics in the Semantic Web and mention future directions and complementary work.
    Type
    a
  13. Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012) 0.01
    0.0060712704 = product of:
      0.015178176 = sum of:
        0.008341924 = weight(_text_:a in 433) [ClassicSimilarity], result of:
          0.008341924 = score(doc=433,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 433, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=433)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 433) [ClassicSimilarity], result of:
              0.013672504 = score(doc=433,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 433, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The modern Web has grown from a publishing place of well-structured data and HTML pages for companies and experienced users into a vivid publishing and data-exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web has become highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and thus also used in other applications. Numerous systems nowadays already actively utilize existing content from various sources such as WordNet or Wikipedia. Some well-known examples of such systems include DBpedia, Freebase, Spock, and DBLife. A major challenge in combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach for addressing the entity linkage problem for heterogeneous, uncertain, and volatile data.
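The entity linkage decision described in the abstract can be illustrated with a deliberately simple pairwise matcher. The chapter's actual approach models uncertainty rather than using a crisp threshold, so the Jaccard comparison and the threshold below are only a stand-in sketch with invented records:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two attribute sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two descriptions of (possibly) the same real-world entity, drawn from
# heterogeneous sources. Attribute strings are illustrative, not real data.
rec_source_1 = {"label:Berlin", "country:Germany", "type:City"}
rec_source_2 = {"label:Berlin", "country:Germany", "type:Settlement"}

score = jaccard(rec_source_1, rec_source_2)
linked = score >= 0.5  # the threshold is an arbitrary illustration
```

With two of four distinct attributes shared, the similarity is 0.5 and the pair is (just) accepted as a link; a probabilistic approach would instead carry this uncertainty forward.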
  14. Cahier, J.-P.; Ma, X.; Zaher, L'H.: Document and item-based modeling : a hybrid method for a socio-semantic web (2010) 0.01
    0.0060245167 = product of:
      0.015061291 = sum of:
        0.009535614 = weight(_text_:a in 62) [ClassicSimilarity], result of:
          0.009535614 = score(doc=62,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 62, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=62)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 62) [ClassicSimilarity], result of:
              0.011051352 = score(doc=62,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 62, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper discusses the challenges of categorising documents and "items of the world" to promote knowledge sharing in large communities of interest. We present the DOCMA method (Document and Item-based Model for Action), dedicated to end-users who have minimal or no knowledge of information science. Community members can elicit structured and indexed business items stemming from their queries, including projects, actors, products, places of interest, and geo-situated objects. This hybrid method has been applied in a collaborative Web portal in the field of sustainability for the past two years.
    Type
    a
  15. Fripp, D.: Using linked data to classify web documents (2010) 0.01
    0.0060245167 = product of:
      0.015061291 = sum of:
        0.009535614 = weight(_text_:a in 4172) [ClassicSimilarity], result of:
          0.009535614 = score(doc=4172,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 4172, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 4172) [ClassicSimilarity], result of:
              0.011051352 = score(doc=4172,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 4172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4172)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
    Type
    a
  16. Mirizzi, R.: Exploratory browsing in the Web of Data (2011) 0.01
    0.005903398 = product of:
      0.014758496 = sum of:
        0.009232819 = weight(_text_:a in 4803) [ClassicSimilarity], result of:
          0.009232819 = score(doc=4803,freq=30.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17268941 = fieldWeight in 4803, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4803)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 4803) [ClassicSimilarity], result of:
              0.011051352 = score(doc=4803,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 4803, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4803)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Thanks to the recent Linked Data initiative, the foundations of the Semantic Web have been built. Shared, open and linked RDF datasets give us the possibility to exploit both the strong theoretical results and the robust technologies and tools developed since the seminal paper on the Semantic Web appeared in 2001. In a simplistic way, we may think of the Semantic Web as an ultra-large distributed database that we can query to get information coming from different sources. In fact, every dataset exposes a SPARQL endpoint to make the data accessible through exact queries. If we know the URI of the famous actress Nicole Kidman in DBpedia, we may retrieve all the movies she has acted in with a simple SPARQL query. We may then aggregate this information with user ratings and genres from IMDB. Even though these are very exciting results and applications, there is much more behind the curtains. Datasets come with a description of their schema, structured in an ontological way. Resources refer to classes, which are in turn organized in well-structured and rich ontologies. By exploiting this further feature, we go beyond the notion of a distributed database and can refer to the Semantic Web as a distributed knowledge base. If our knowledge base states that Paris is located in France (ontological level) and that Moulin Rouge! is set in Paris (data level), we may query the Semantic Web (interpreted as a set of interconnected datasets and related ontologies) to return all the movies starring Nicole Kidman set in France, and Moulin Rouge! will be in the final result set. The ontological level makes it possible to infer new relations among the data.
    The Linked Data initiative and the state of the art in semantic technologies have set off a wave of brand-new search and mash-up applications. The basic idea is to have smarter lookup services for a huge, distributed and social knowledge base. All these applications catch and (re)propose, under a semantic-data perspective, the view of the classical Web as a distributed collection of documents to retrieve. The interlinked nature of the Web, and consequently of the Semantic Web, is exploited (just) to collect and aggregate data coming from different sources. Of course, this is a big step forward in search and Web technologies, but if we limit our investigation to retrieval tasks, we miss another important feature of the current Web: browsing, and in particular exploratory browsing (a.k.a. exploratory search). Thanks to its hyperlinked nature, the Web defined a new way of browsing documents and knowledge: selection by lookup, navigation and trial-and-error tactics were, and still are, exploited by users to search for relevant information satisfying some initial requirements. The basic assumptions behind a lookup search, typical of Information Retrieval (IR) systems, are no longer valid in an exploratory browsing context. An IR system, such as a search engine, assumes that: (1) the user has a clear picture of what she is looking for; and (2) she knows the terminology of the specific knowledge space. On the other side, as argued in the literature, the main challenges in exploratory search can be summarized as: support querying and rapid query refinement; offer facets and metadata-based result filtering; leverage search context; support learning and understanding; offer visualization to support insight/decision making; and facilitate collaboration. In Section 3 we will show two applications for exploratory search in the Semantic Web addressing some of the above challenges.
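The Nicole Kidman example in the abstract corresponds to a straightforward SPARQL query. A sketch of how it could be phrased against DBpedia follows; `dbo:starring` is a real DBpedia ontology property, but the "set in" property (`dbo:setting` here) is an assumption made for illustration, and the query is only built as a string, not executed:

```python
# Build (but do not execute) a SPARQL query for the example in the abstract:
# movies starring Nicole Kidman that are set in France.
query = """
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT ?film WHERE {
  ?film  dbo:starring dbr:Nicole_Kidman .
  ?film  dbo:setting  ?place .            # hypothetical property for "set in"
  ?place dbo:country  dbr:France .
}
"""
```

Sending this string to the DBpedia SPARQL endpoint would return the bindings for `?film`; the inference the abstract describes (Paris located in France, therefore Moulin Rouge! qualifies) happens at the ontological level, not in the query text.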
  17. Cali, A.: Ontology querying : datalog strikes back (2017) 0.01
    0.005898641 = product of:
      0.014746603 = sum of:
        0.0100103095 = weight(_text_:a in 3928) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=3928,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 3928, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3928)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 3928) [ClassicSimilarity], result of:
              0.009472587 = score(doc=3928,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 3928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3928)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this tutorial we address the problem of ontology querying, that is, the problem of answering queries against a theory constituted by facts (the data) and inference rules (the ontology). A varied landscape of ontology languages exists in the scientific literature, with varying degrees of complexity in query processing. We argue that Datalog±, a family of languages derived from Datalog, is a powerful tool for ontology querying. To illustrate the impact of this comeback of Datalog, we present the basic paradigms behind the main Datalog± languages as well as some recent extensions. We also present some efficient query processing techniques for some cases.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Type
    a
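The fact-plus-rule querying that the Cali tutorial discusses can be sketched with naive Datalog forward chaining. The program below (a single transitivity rule over invented `locatedIn` facts) is an illustration of the general technique, not an example taken from the tutorial, and real Datalog± engines use far more refined evaluation strategies:

```python
# Facts are (relation, arg1, arg2) triples; these two are invented for illustration.
facts = {
    ("locatedIn", "Paris", "France"),
    ("locatedIn", "France", "Europe"),
}

# Rule (transitivity): locatedIn(X, Z) :- locatedIn(X, Y), locatedIn(Y, Z).
def forward_chain(facts):
    """Naively apply the transitivity rule until a fixpoint is reached."""
    derived = set(facts)
    while True:
        new = {
            ("locatedIn", x, z)
            for (r1, x, y1) in derived if r1 == "locatedIn"
            for (r2, y2, z) in derived if r2 == "locatedIn" and y1 == y2
        }
        if new <= derived:       # fixpoint: nothing new was derived
            return derived
        derived |= new

closure = forward_chain(facts)
```

A query such as "what is Paris located in?" is then answered against the closure, which contains the derived fact that Paris is located in Europe alongside the two stated facts.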
  18. Willer, M.; Dunsire, G.: Bibliographic information organization in the Semantic Web (2013) 0.01
    0.005889678 = product of:
      0.014724194 = sum of:
        0.005898632 = weight(_text_:a in 2143) [ClassicSimilarity], result of:
          0.005898632 = score(doc=2143,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 2143, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2143)
        0.008825562 = product of:
          0.017651124 = sum of:
            0.017651124 = weight(_text_:information in 2143) [ClassicSimilarity], result of:
              0.017651124 = score(doc=2143,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21684799 = fieldWeight in 2143, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2143)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    New technologies will underpin the future generation of library catalogues. To facilitate their role in providing information, serving users, and fulfilling their mission as cultural heritage and memory institutions, libraries must take a technological leap; their standards and services must be transformed to those of the Semantic Web. Bibliographic Information Organization in the Semantic Web explores the technologies that may power future library catalogues and argues the necessity of such a leap. The text introduces international bibliographic standards and models, and fundamental concepts in their representation in the context of the Semantic Web. Subsequent chapters cover bibliographic information organization, linked open data, methodologies for publishing library metadata, and a discussion of the wider environment (museum, archival and publishing communities) and users, followed by a conclusion.
    Series
    Chandos information professional series
  19. Ghorbel, H.; Bahri, A.; Bouaziz, R.: Fuzzy ontologies building platform for Semantic Web : FOB platform (2012) 0.01
    0.0058368337 = product of:
      0.014592084 = sum of:
        0.009010308 = weight(_text_:a in 98) [ClassicSimilarity], result of:
          0.009010308 = score(doc=98,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 98, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 98) [ClassicSimilarity], result of:
              0.011163551 = score(doc=98,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 98, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=98)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The unstructured design of Web resources favors human comprehension but makes the automatic exploitation of their contents by machines difficult. The Semantic Web therefore aims at making cooperation between human and machine possible by giving any piece of information a well-defined meaning. The first weavings of the Semantic Web are already prepared: machines become able to process and understand data that they were previously only able to visualize. Ontologies constitute an essential element of the Semantic Web, as they serve as a form of knowledge representation, sharing, and reuse. However, Web content is subject to imperfection, and crisp ontologies are less suitable for representing concepts with imprecise definitions. To overcome this problem, fuzzy ontologies constitute a promising research direction. Indeed, the definition of the components of fuzzy ontologies is an issue that needs to be treated carefully: an appropriate methodology for building and operationalizing fuzzy ontological models is necessary. This chapter defines a fuzzy ontological model based on fuzzy description logic. This model uses a new approach for the formal description of fuzzy ontologies and shows how all the basic components defined for fuzzy ontologies can be constructed.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
    Type
    a
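Fuzzy description logics of the kind the Ghorbel et al. chapter builds on grade concept membership in [0, 1] and combine degrees with t-norms. A minimal sketch using the Gödel (min/max) operators follows; the concept names and degrees are invented for illustration and do not come from the chapter:

```python
# Membership degrees of one individual in two fuzzy concepts (illustrative values).
degrees = {"Tall": 0.8, "Athletic": 0.6}

def t_norm(a, b):
    """Gödel conjunction: degree of membership in (C AND D)."""
    return min(a, b)

def t_conorm(a, b):
    """Gödel disjunction: degree of membership in (C OR D)."""
    return max(a, b)

def negation(a):
    """Standard fuzzy negation: degree of membership in (NOT C)."""
    return 1.0 - a

# Degree to which the individual is "Tall AND Athletic" / "Tall OR Athletic".
conj = t_norm(degrees["Tall"], degrees["Athletic"])
disj = t_conorm(degrees["Tall"], degrees["Athletic"])
```

A crisp ontology would force each membership to 0 or 1; the graded degrees are exactly what lets fuzzy ontologies represent concepts with imprecise definitions.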
  20. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.01
    0.0058368337 = product of:
      0.014592084 = sum of:
        0.009010308 = weight(_text_:a in 430) [ClassicSimilarity], result of:
          0.009010308 = score(doc=430,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 430, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=430)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 430) [ClassicSimilarity], result of:
              0.011163551 = score(doc=430,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 430, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=430)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    With the advance of the Semantic Web, increasing amounts of data are available in a structured and machine-understandable form. This opens opportunities for users to employ semantic queries instead of simple keyword-based ones to express their information need accurately. However, constructing semantic queries is a demanding task for human users [11]. To compose a valid semantic query, a user has to (1) master a query language (e.g., SPARQL) and (2) acquire sufficient knowledge about the ontology or the schema of the data source. While there are systems which support this task with visual tools [21, 26] or natural language interfaces [3, 13, 14, 18], the process of query construction can still be complex and time consuming. According to [24], users prefer keyword search and struggle with the construction of semantic queries even when supported by a natural language interface. Several keyword search approaches have already been proposed to ease information seeking on semantic data [16, 32, 35] or databases [1, 31]. However, keyword queries lack the expressivity to precisely describe the user's intent. As a result, ranking can at best put the query intentions of the majority on top, making it impossible to take the intentions of all users into consideration.
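The keyword-to-semantic-query gap the abstract describes can be sketched as a lookup from keywords to candidate schema terms, from which structured query templates are enumerated for the user to choose among. The vocabulary and the template syntax below are invented; the paper's actual system constructs queries incrementally and interactively rather than enumerating them all at once:

```python
from itertools import product

# Toy schema vocabulary mapping keywords to candidate schema terms
# (names are invented for illustration, not taken from a real ontology).
vocabulary = {
    "movie": ["Film"],
    "director": ["directedBy", "Director"],
}

def candidate_queries(keywords, vocabulary):
    """Enumerate naive triple-pattern templates, one per keyword interpretation."""
    interpretations = [vocabulary.get(k, []) for k in keywords]
    queries = []
    for combo in product(*interpretations):  # Cartesian product of candidates
        pattern = " . ".join(f"?x <{term}> ?v{i}" for i, term in enumerate(combo))
        queries.append(pattern)
    return queries

qs = candidate_queries(["movie", "director"], vocabulary)
```

Because "director" is ambiguous between a property and a class, even this two-keyword query yields two candidate interpretations; with realistic schemas the space explodes, which is why interactive refinement matters.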

Types

  • a 71
  • m 32
  • el 23
  • s 11
  • x 3
  • r 1

Subjects