Search (65 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.03
    0.033121504 = product of:
      0.07728351 = sum of:
        0.044457134 = weight(_text_:personal in 2661) [ClassicSimilarity], result of:
          0.044457134 = score(doc=2661,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.22285949 = fieldWeight in 2661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.022107007 = weight(_text_:ed in 2661) [ClassicSimilarity], result of:
          0.022107007 = score(doc=2661,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.15715398 = fieldWeight in 2661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.010719369 = product of:
          0.021438738 = sum of:
            0.021438738 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
              0.021438738 = score(doc=2661,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.15476047 = fieldWeight in 2661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits - the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data is drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
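    The relevance figures attached to each hit above are Lucene's "explain" output for its ClassicSimilarity (TF-IDF) ranking: for each matching query term, a queryWeight (idf × queryNorm) is multiplied by a fieldWeight (tf × idf × fieldNorm), the per-term weights are summed, and the sum is scaled by the coordination factor coord(matched clauses / total clauses). A minimal sketch that reproduces the first entry's numbers, assuming the standard ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the constants are copied from the explain tree, the variable names are ours:

    import math

    # Constants copied from the explain tree for term "personal" in doc 2661
    max_docs, doc_freq = 44218, 775
    freq = 2.0                                     # termFreq of "personal" in the field
    query_norm = 0.0395589                         # queryNorm (query-wide constant)
    field_norm = 0.03125                           # fieldNorm(doc=2661), field-length normalization

    idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~5.0427346
    tf = math.sqrt(freq)                           # ~1.4142135

    query_weight = idf * query_norm                # ~0.19948503
    field_weight = tf * idf * field_norm           # ~0.22285949
    term_score = query_weight * field_weight       # ~0.044457134

    # Per-term weights are summed and scaled by coord(3/7):
    # the document matched 3 of the 7 query clauses.
    doc_score = (0.044457134 + 0.022107007 + 0.010719369) * (3 / 7)
    print(term_score, doc_score)                   # ~0.0444571, ~0.0331215

    Only the constants (freq, idf, fieldNorm, coord) change from entry to entry; the composition is the same throughout the result list.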
  2. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.02
    0.01641319 = product of:
      0.05744616 = sum of:
        0.038687263 = weight(_text_:ed in 2640) [ClassicSimilarity], result of:
          0.038687263 = score(doc=2640,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.27501947 = fieldWeight in 2640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2640)
        0.018758897 = product of:
          0.037517793 = sum of:
            0.037517793 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.037517793 = score(doc=2640,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.2708308 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    0.015764717 = product of:
      0.055176504 = sum of:
        0.044457134 = weight(_text_:personal in 1634) [ClassicSimilarity], result of:
          0.044457134 = score(doc=1634,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.22285949 = fieldWeight in 1634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.010719369 = product of:
          0.021438738 = sum of:
            0.021438738 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
              0.021438738 = score(doc=1634,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.15476047 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  4. Weibel, S.L.: Social Bibliography : a personal perspective on libraries and the Semantic Web (2006) 0.02
    0.01571797 = product of:
      0.110025786 = sum of:
        0.110025786 = weight(_text_:personal in 250) [ClassicSimilarity], result of:
          0.110025786 = score(doc=250,freq=4.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.5515491 = fieldWeight in 250, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0546875 = fieldNorm(doc=250)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper presents a personal perspective on libraries and the Semantic Web. The paper discusses computing power, increased availability of processable text, social software developments and the ideas underlying Web 2.0 and the impact of these developments in the context of libraries and information. The article concludes with a discussion of social bibliography and the declining hegemony of catalog records, and emphasizes the strengths of librarianship and the profession's ability to contribute to Semantic Web development.
  5. Tillett, B.B.: AACR2 and metadata : library opportunities in the global semantic Web (2003) 0.01
    0.013257488 = product of:
      0.09280241 = sum of:
        0.09280241 = weight(_text_:global in 5510) [ClassicSimilarity], result of:
          0.09280241 = score(doc=5510,freq=4.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.46896797 = fieldWeight in 5510, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.046875 = fieldNorm(doc=5510)
      0.14285715 = coord(1/7)
    
    Abstract
    Explores the opportunities for libraries to contribute to the proposed global "Semantic Web." Library name and subject authority files, including work that IFLA has done related to a new view of "Universal Bibliographic Control" in the Internet environment and the work underway in the U.S. and Europe, are making a reality of the virtual international authority file on the Web. The bibliographic and authority records created according to AACR2 reflect standards for metadata that libraries have provided for years. New opportunities for using these records in the digital world are described (interoperability), including mapping with Dublin Core metadata. AACR2 recently updated Chapter 9 on Electronic Resources. That process and highlights of the changes are described, including Library of Congress' rule interpretations.
  6. Resource Description Framework (RDF) (2004) 0.01
    0.0127020385 = product of:
      0.08891427 = sum of:
        0.08891427 = weight(_text_:personal in 3063) [ClassicSimilarity], result of:
          0.08891427 = score(doc=3063,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.44571897 = fieldWeight in 3063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0625 = fieldNorm(doc=3063)
      0.14285715 = coord(1/7)
    
    Abstract
    The Resource Description Framework (RDF) integrates a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications provide a lightweight ontology system to support the exchange of knowledge on the Web. The W3C Semantic Web Activity Statement explains W3C's plans for RDF, including the RDF Core WG, Web Ontology and the RDF Interest Group.
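    To make the data model concrete: the following sketch builds and serializes two RDF statements with the rdflib Python library. The namespace, resource URI, and the choice of Dublin Core properties are illustrative assumptions for a "personal collection" example, not part of the RDF specifications themselves.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DC

    EX = Namespace("http://example.org/music/")   # hypothetical namespace, for illustration only

    g = Graph()
    track = EX["track/42"]
    g.add((track, DC.title, Literal("Example Song")))      # statement: <track> dc:title "Example Song"
    g.add((track, DC.creator, Literal("Example Artist")))

    # RDF is a graph model; RDF/XML and Turtle are two of its interchange syntaxes.
    print(g.serialize(format="xml"))
    print(g.serialize(format="turtle"))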
  7. Davies, J.; Fensel, D.; Harmelen, F. van: Conclusions: ontology-driven knowledge management : towards the Semantic Web? (2004) 0.01
    0.012499279 = product of:
      0.087494954 = sum of:
        0.087494954 = weight(_text_:global in 4407) [ClassicSimilarity], result of:
          0.087494954 = score(doc=4407,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.44214723 = fieldWeight in 4407, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0625 = fieldNorm(doc=4407)
      0.14285715 = coord(1/7)
    
    Abstract
    The global economy is rapidly becoming more and more knowledge intensive. Knowledge is now widely recognized as the fourth production factor, on an equal footing with the traditional production factors of labour, capital and materials. Managing knowledge is as important as the traditional management of labour, capital and materials. In this book, we have shown how Semantic Web technology can make an important contribution to knowledge management.
  8. Brambilla, M.; Ceri, S.: Designing exploratory search applications upon Web data sources (2012) 0.01
    0.011952057 = product of:
      0.083664395 = sum of:
        0.083664395 = weight(_text_:brain in 428) [ClassicSimilarity], result of:
          0.083664395 = score(doc=428,freq=2.0), product of:
            0.2736591 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0395589 = queryNorm
            0.30572486 = fieldWeight in 428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.03125 = fieldNorm(doc=428)
      0.14285715 = coord(1/7)
    
    Abstract
    Search is the preferred method to access information in today's computing systems. The Web, accessed through search engines, is universally recognized as the source for answering users' information needs. However, offering a link to a Web page does not cover all information needs. Even simple problems, such as "Which theater offers an at least three-star action movie in London close to a good Italian restaurant," can only be solved by searching the Web multiple times, e.g., by extracting a list of the recent action movies filtered by ranking, then looking for movie theaters, then looking for Italian restaurants close to them. While search engines hint at useful information, the user's brain is the fundamental platform for information integration. An important trend is the availability of new, specialized data sources - the so-called "long tail" of the Web of data. Such carefully collected and curated data sources can be much more valuable than information currently available in Web pages; however, many sources remain hidden or insulated, for lack of software solutions for bringing them to the surface and making them usable in the search context. A new class of tailor-made systems, designed to satisfy the needs of users with specific aims, will support the publishing and integration of data sources for vertical domains; the user will be able to select sources based on individual or collective trust, and systems will be able to route queries to such sources and to provide easy-to-use interfaces for combining them within search strategies, at the same time rewarding the data source owners for each contribution to effective search. Efforts such as Google's Fusion Tables show that the technology for bringing hidden data sources to the surface is feasible.
  9. Ning, X.; Jin, H.; Wu, H.: RSS: a framework enabling ranked search on the semantic web (2008) 0.01
    0.011047906 = product of:
      0.07733534 = sum of:
        0.07733534 = weight(_text_:global in 2069) [ClassicSimilarity], result of:
          0.07733534 = score(doc=2069,freq=4.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.39080665 = fieldWeight in 2069, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2069)
      0.14285715 = coord(1/7)
    
    Abstract
    The semantic web not only contains resources but also includes the heterogeneous relationships among them, which sharply distinguishes it from the current web. With the growth of the semantic web, specialized search techniques are of increasing significance. In this paper, we present RSS, a framework for enabling ranked semantic search on the semantic web. In this framework, the heterogeneity of relationships is fully exploited to determine the global importance of resources. In addition, the search results can be greatly expanded with entities most semantically related to the query, thus providing users with properly ordered semantic search results by combining global ranking values and the relevance between the resources and the query. The proposed semantic search model, which supports inference, is very different from traditional keyword-based search methods. Moreover, RSS also differs from many current methods of accessing semantic web data in that it applies novel ranking strategies to prevent returning search results in disorder. The experimental results show that the framework is feasible and can produce better ordering of semantic search results than directly applying the standard PageRank algorithm on the semantic web.
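    The framework above combines a query-independent global importance value with query relevance. The paper's own combination formula is not reproduced here; purely as a generic illustration of the idea, a weighted linear mix of the two signals could look like the following sketch (function and data are hypothetical):

    def combined_score(global_rank: float, relevance: float, alpha: float = 0.5) -> float:
        """Generic mix of a query-independent importance value (e.g. a PageRank-style
        score over the RDF graph) and query relevance. Illustrative only; this is not
        the combination defined by the RSS framework."""
        return alpha * global_rank + (1 - alpha) * relevance

    # Hypothetical candidates as (global importance, relevance to the query)
    candidates = {"resA": (0.9, 0.1), "resB": (0.3, 0.8), "resC": (0.6, 0.6)}
    ranked = sorted(candidates, key=lambda r: combined_score(*candidates[r]), reverse=True)
    print(ranked)   # ['resC', 'resB', 'resA'] for alpha = 0.5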
  10. San Segundo, R.; Ávila, D.M.: New conceptual structures for the digital environment : from KOS to the semantic interconnection (2012) 0.01
    0.011047906 = product of:
      0.07733534 = sum of:
        0.07733534 = weight(_text_:global in 850) [ClassicSimilarity], result of:
          0.07733534 = score(doc=850,freq=4.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.39080665 = fieldWeight in 850, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0390625 = fieldNorm(doc=850)
      0.14285715 = coord(1/7)
    
    Abstract
    Primitive thinking forms affected the organization of knowledge, and at a later date writing also affected organization. Currently, the web requires new forms of learning and knowledge; with the globalization of information, connectivity and virtuality have a bearing on human thought. Digital thinking is shaping our reality and its organizational form. Natural memory, considered to be a process that requires the structure of natural language and human capabilities, is interwoven with a subject and a conscience; memory preserved through writing required other tools to assist it, and classifications, cataloguing, organization or other KOS were created. The new tool for recovering digital memory is the semantic web. This points to information's future on the Internet and seems to approach the utopia of global, organized information and attempts to give the website greater significance. The Web 3.0 incorporates a proliferation of languages, concepts and tools that are difficult to govern and are created by users. The semantic web seems to be a natural evolution of the participative web in which we find ourselves, and if an effective combination is achieved between the inclusion of semantic content in web pages and the use of artificial intelligence it will be a revolution; semantic codification will be a fact when it is totally automated. Based on this, a collective digital intelligence is being constituted. We find ourselves before intelligent multitudes with broad access to enormous amounts of information. The intelligent multitude emerges when technologies interconnect. In this global interconnection of semantic information an exponential pattern of technological growth can take place.
  11. Bizer, C.; Cyganiak, R.; Heath, T.: How to publish Linked Data on the Web (2007) 0.01
    0.010936869 = product of:
      0.07655808 = sum of:
        0.07655808 = weight(_text_:global in 3791) [ClassicSimilarity], result of:
          0.07655808 = score(doc=3791,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.38687882 = fieldWeight in 3791, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3791)
      0.14285715 = coord(1/7)
    
    Content
    This tutorial has been superseded by the book Linked Data: Evolving the Web into a Global Data Space, written by Tom Heath and Christian Bizer. This tutorial was published in 2007 and is still online for historical reasons. The Linked Data book was published in 2011 and provides a more detailed and up-to-date introduction to Linked Data.
  12. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.01
    0.010824694 = product of:
      0.07577286 = sum of:
        0.07577286 = weight(_text_:global in 4725) [ClassicSimilarity], result of:
          0.07577286 = score(doc=4725,freq=6.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.38291076 = fieldWeight in 4725, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.03125 = fieldNorm(doc=4725)
      0.14285715 = coord(1/7)
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
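    The publication side described above amounts to serving an RDF description, under a dereferenceable URI, that reuses shared vocabularies and links out to other data sets. A minimal sketch with rdflib; the example URIs and the FOAF/owl:sameAs choices are assumptions made for illustration, not prescriptions from the book:

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, OWL, RDF

    g = Graph()
    person = URIRef("http://example.org/people/alice")     # hypothetical dereferenceable URI

    g.add((person, RDF.type, FOAF.Person))                  # reuse a shared vocabulary (FOAF)
    g.add((person, FOAF.name, Literal("Alice Example")))
    # Set an RDF link into a (hypothetical) external data set to join the Web of Data
    g.add((person, OWL.sameAs, URIRef("http://data.example.net/persons/a42")))

    # A Linked Data server would return this description when the URI is dereferenced.
    print(g.serialize(format="turtle"))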
  13. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.01
    0.010647568 = product of:
      0.037266485 = sum of:
        0.022107007 = weight(_text_:ed in 2654) [ClassicSimilarity], result of:
          0.022107007 = score(doc=2654,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.15715398 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.03125 = fieldNorm(doc=2654)
        0.015159478 = product of:
          0.030318957 = sum of:
            0.030318957 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.030318957 = score(doc=2654,freq=4.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.21886435 = fieldWeight in 2654, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collected effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
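    A minimal sketch of the kind of SKOS encoding the poster above reports on, again with rdflib. The namespace, notation, labels, and hierarchy below are invented stand-ins for a CCT class with its preferred and entry (non-preferred) thesaurus terms; they are not actual CCT data:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    CCT = Namespace("http://example.org/cct/")     # hypothetical namespace for CCT concepts

    g = Graph()
    concept = CCT["Q94"]                           # invented notation, not a real CCT class
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.notation, Literal("Q94")))
    g.add((concept, SKOS.prefLabel, Literal("botany", lang="en")))        # preferred thesaurus term
    g.add((concept, SKOS.altLabel, Literal("plant science", lang="en")))  # entry (non-preferred) term
    g.add((concept, SKOS.broader, CCT["Q"]))                              # position in the class hierarchy

    print(g.serialize(format="turtle"))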
  14. Fluit, C.; Horst, H. ter; Meer, J. van der; Sabou, M.; Mika, P.: Spectacle (2004) 0.01
    0.009526528 = product of:
      0.0666857 = sum of:
        0.0666857 = weight(_text_:personal in 4337) [ClassicSimilarity], result of:
          0.0666857 = score(doc=4337,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.33428922 = fieldWeight in 4337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.046875 = fieldNorm(doc=4337)
      0.14285715 = coord(1/7)
    
    Abstract
    Many Semantic Web initiatives improve the capabilities of machines to exchange the meaning of information with other machines. These efforts lead to an increased quality of the application's results, but their user interfaces take little or no advantage of the semantic richness. For example, an ontology-based search engine will use its ontology when evaluating the user's query (e.g. for query formulation, disambiguation or evaluation), but fails to use it to significantly enrich the presentation of the results to a human user. For example, one could imagine replacing the endless list of hits with a structured presentation based on the semantic properties of the hits. Another problem is that the modelling of a domain is done from a single perspective (most often that of the information provider). Therefore, presentation based on the resulting ontology is unlikely to satisfy the needs of all the different types of users of the information. So even assuming an ontology for the domain is in place, mapping that ontology to the needs of individual users - based on their tasks, expertise and personal preferences - is not trivial.
  15. Subirats, I.; Prasad, A.R.D.; Keizer, J.; Bagdanov, A.: Implementation of rich metadata formats and semantic tools using DSpace (2008) 0.01
    0.009378965 = product of:
      0.032826375 = sum of:
        0.022107007 = weight(_text_:ed in 2656) [ClassicSimilarity], result of:
          0.022107007 = score(doc=2656,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.15715398 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.03125 = fieldNorm(doc=2656)
        0.010719369 = product of:
          0.021438738 = sum of:
            0.021438738 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
              0.021438738 = score(doc=2656,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.15476047 = fieldWeight in 2656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2656)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  16. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.01
    0.008206595 = product of:
      0.02872308 = sum of:
        0.019343631 = weight(_text_:ed in 2665) [ClassicSimilarity], result of:
          0.019343631 = score(doc=2665,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.13750973 = fieldWeight in 2665, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.009379448 = product of:
          0.018758897 = sum of:
            0.018758897 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
              0.018758897 = score(doc=2665,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.1354154 = fieldWeight in 2665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2665)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  17. O'Hara, K.; Hall, W.: Semantic Web (2009) 0.01
    0.007816008 = product of:
      0.054712057 = sum of:
        0.054712057 = weight(_text_:ed in 3871) [ClassicSimilarity], result of:
          0.054712057 = score(doc=3871,freq=4.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.38893628 = fieldWeight in 3871, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3871)
      0.14285715 = coord(1/7)
    
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  18. Gibbins, N.; Shadbolt, N.: Resource Description Framework (RDF) (2009) 0.01
    0.007816008 = product of:
      0.054712057 = sum of:
        0.054712057 = weight(_text_:ed in 4695) [ClassicSimilarity], result of:
          0.054712057 = score(doc=4695,freq=4.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.38893628 = fieldWeight in 4695, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4695)
      0.14285715 = coord(1/7)
    
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  19. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.01
    0.0078120497 = product of:
      0.054684345 = sum of:
        0.054684345 = weight(_text_:global in 97) [ClassicSimilarity], result of:
          0.054684345 = score(doc=97,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.276342 = fieldWeight in 97, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0390625 = fieldNorm(doc=97)
      0.14285715 = coord(1/7)
    
    Abstract
    Robertson and Spärck Jones pioneered experimental probabilistic models (the Binary Independence Model), with a typology generalizing the Boolean model, frequency counts to calculate elementary weightings, and the combination of these into a global probabilistic estimation. However, this model did not consider dependencies between indexing terms. An extension to mixture models (e.g., using a 2-Poisson law) made it possible to take these dependencies into account from a macroscopic point of view (BM25), as well as to apply shallow linguistic processing of co-references. New approaches (language models, for example "bag of words" models, probabilistic dependencies between requests and documents, and consequently Bayesian inference using a Dirichlet conjugate prior) furnished new solutions for document structuring (categorization) and for index smoothing. Presently, in these probabilistic models the main issues have been addressed from a formal point of view only. Thus, linguistic properties are neglected in the indexing language. The authors examine how linguistic and semantic modeling can be integrated into indexing languages and set up a hybrid model that makes it possible to deal with different information retrieval problems in a unified way.
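    As a reference point for the models named above, a compact sketch of the BM25 weighting (the macroscopic 2-Poisson approximation mentioned in the abstract). The k1 and b values are the conventional defaults, not parameters taken from the authors; the toy collection statistics are invented:

    import math

    def bm25_term_weight(tf: float, df: int, n_docs: int, doc_len: float,
                         avg_doc_len: float, k1: float = 1.2, b: float = 0.75) -> float:
        """BM25 weight of one query term in one document: an idf factor times a
        saturating, length-normalized term-frequency factor."""
        idf = math.log((n_docs - df + 0.5) / (df + 0.5))
        tf_norm = (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
        return idf * tf_norm

    def bm25_score(query_terms, doc_tfs, dfs, n_docs, doc_len, avg_doc_len):
        """Document score = sum of BM25 weights over query terms present in the document."""
        return sum(bm25_term_weight(doc_tfs[t], dfs[t], n_docs, doc_len, avg_doc_len)
                   for t in query_terms if t in doc_tfs)

    # Toy example: two-term query against one document in a 1,000-document collection
    score = bm25_score(["semantic", "web"], {"semantic": 3, "web": 5},
                       {"semantic": 120, "web": 400}, 1000, 180.0, 200.0)
    print(round(score, 3))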
  20. Virgilio, R. De; Cappellari, P.; Maccioni, A.; Torlone, R.: Path-oriented keyword search query over RDF (2012) 0.01
    0.0078120497 = product of:
      0.054684345 = sum of:
        0.054684345 = weight(_text_:global in 429) [ClassicSimilarity], result of:
          0.054684345 = score(doc=429,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.276342 = fieldWeight in 429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0390625 = fieldNorm(doc=429)
      0.14285715 = coord(1/7)
    
    Abstract
    We are witnessing a smooth evolution of the Web from a worldwide information space of linked documents to a global knowledge base, where resources are identified by means of uniform resource identifiers (URIs, essentially string identifiers) and are semantically described and correlated through resource description framework (RDF, a metadata data model) statements. With the size and availability of data constantly increasing (currently around 7 billion RDF triples and 150 million RDF links), a fundamental problem lies in the difficulty users face in finding and retrieving the information they are interested in. In general, to access semantic data, users need to know the organization of the data and the syntax of a specific query language (e.g., SPARQL or variants thereof). Clearly, this represents an obstacle to information access for non-expert users. For this reason, keyword search-based systems are increasingly capturing the attention of researchers. Recently, many approaches to keyword-based search over structured and semistructured data have been proposed. These approaches usually implement IR strategies on top of traditional database management systems with the goal of freeing users from having to know the data organization and query languages.
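    The entry above contrasts keyword search with structured SPARQL access. For readers unfamiliar with the latter, a minimal sketch that runs a SPARQL query over a small in-memory rdflib graph; the two-triple data set and the regex "keyword" filter are illustrative only:

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        @prefix dc: <http://purl.org/dc/elements/1.1/> .
        ex:doc1 dc:title "Linked Data and the Semantic Web" .
        ex:doc2 dc:title "Keyword search over graphs" .
    """, format="turtle")

    # A structured SPARQL query that emulates a keyword lookup via a regex FILTER
    results = g.query("""
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?doc ?title WHERE {
            ?doc dc:title ?title .
            FILTER regex(?title, "semantic", "i")
        }
    """)
    for doc, title in results:
        print(doc, title)   # only ex:doc1 matches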

Languages

  • e 56
  • d 9

Types

  • a 40
  • el 17
  • m 12
  • s 5
  • n 1
  • x 1