Search (46 results, page 1 of 3)

  • Filter: theme_ss:"Semantic Web"
  1. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.04
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits: the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data are drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  3. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.03
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. 
Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations and Geonames (http://geonames.org/) URIs for places are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists. Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
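The fallback strategy the abstract describes (prefer established naming authorities, extend to emerging ones where coverage is missing) can be sketched in a few lines of Python. All entity labels and identifiers below are invented illustrations of the URI patterns mentioned above, not verified authority records.

```python
# Hypothetical sketch: resolve an entity label against a prioritized list
# of naming authorities, established ones first. Identifiers are invented
# examples following the URI patterns named in the abstract.

AUTHORITIES = [
    # (authority name, known identifiers keyed by entity label)
    ("WorldCat Identities", {"Example Person": "http://worldcat.org/identities/lccn-n00000000"}),
    ("Geonames", {"Example Place": "http://sws.geonames.org/0000000/"}),
    ("Freebase", {"Example Event": "http://freebase.com/m/00000"}),
]

def resolve(label):
    """Return (authority, URI) from the first authority covering the label,
    or None if no naming authority covers it."""
    for name, identifiers in AUTHORITIES:
        if label in identifiers:
            return name, identifiers[label]
    return None

print(resolve("Example Place"))   # found in the second authority
print(resolve("Unknown Thing"))   # no authority covers it
```

The design choice mirrored here is that links are built from authority URIs rather than literal strings, so they remain meaningful outside any one platform.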
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  4. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.03
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link topology based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
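The link-topology ranking the abstract refers to can be illustrated with a minimal PageRank-style computation over a toy graph. This is a simplified sketch, not Google's actual algorithm; the damping factor 0.85 and the three-page graph are assumptions for illustration.

```python
# Simplified PageRank sketch: rank pages by the link structure alone.
# Assumptions: damping factor 0.85, uniform teleport, toy page names.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:  # dangling page: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(toy_web)
# "c" is linked to by both "a" and "b", so it ends up ranked highest
print(max(ranks, key=ranks.get))
```

The point, matching the abstract, is that authority is inferred from who links to a page rather than from its own content.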
  5. Breslin, J.G.: Social semantic information spaces (2009) 0.03
    Abstract
    The structural and syntactic web put in place in the early 1990s is still much the same as what we use today: resources (web pages, files, etc.) connected by untyped hyperlinks. By untyped, we mean that there is no easy way for a computer to figure out what a link between two pages means - for example, on the W3C website, there are hundreds of links to the various organisations that are registered members of the association, but there is nothing explicitly saying that the link is to an organisation that is a "member of" the W3C or what type of organisation is represented by the link. On John's work page, he links to many papers he has written, but it does not explicitly say that he is the author of those papers or that he wrote such-and-such when he was working at a particular university. In fact, the Web was envisaged to be much more, as one can see from the image in Fig. 1 which is taken from Tim Berners-Lee's original outline for the Web in 1989, entitled "Information Management: A Proposal". In this, all the resources are connected by links describing the type of relationships, e.g. "wrote", "describe", "refers to", etc. This is a precursor to the Semantic Web which we will come back to later.
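The untyped-versus-typed distinction the abstract draws can be shown concretely. The resource names and relationship labels below are invented for illustration:

```python
# Untyped hyperlinks record only that two resources are connected:
untyped_links = [
    ("w3.org", "example-corp.com"),
    ("john/work", "john/papers/p1.pdf"),
]

# Typed links (the Semantic Web model) record what the connection means,
# as (subject, relationship, object) statements:
typed_links = [
    ("example-corp.com", "member_of", "w3.org"),
    ("john", "author_of", "john/papers/p1.pdf"),
]

def what_relates(subject, obj, links):
    """Return the stated relationship types between two resources."""
    return [rel for s, rel, o in links if s == subject and o == obj]

# With typed links a program can answer "how are these related?"
print(what_relates("john", "john/papers/p1.pdf", typed_links))  # ['author_of']
```

With the untyped list no such question is answerable; that gap is exactly what link typing closes.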
  6. Resource Description Framework (RDF) : Concepts and Abstract Syntax (2004) 0.03
    Abstract
    The Resource Description Framework (RDF) is a framework for representing information in the Web. RDF Concepts and Abstract Syntax defines an abstract syntax on which RDF is based, and which serves to link its concrete syntax to its formal semantics. It also includes discussion of design goals, key concepts, datatyping, character normalization and handling of URI references.
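The abstract syntax it defines boils down to subject-predicate-object triples. A minimal hand-rolled serialization in N-Triples form, using real Dublin Core property URIs but an invented example resource, looks like this:

```python
# RDF's data model: each statement is a (subject, predicate, object) triple.
# The book URI is invented; the predicates are genuine Dublin Core URIs.

triples = [
    ("http://example.org/book/1", "http://purl.org/dc/elements/1.1/title", "Semantic Web Primer"),
    ("http://example.org/book/1", "http://purl.org/dc/elements/1.1/creator", "A. Author"),
]

def to_ntriples(triples):
    """Serialize (subject, predicate, literal-object) triples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        lines.append('<%s> <%s> "%s" .' % (s, p, o))
    return "\n".join(lines)

print(to_ntriples(triples))
```

This is only a sketch of the concrete syntax side; the specification's point is that such serializations all map back to the same abstract triple model.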
  7. Guns, R.: Tracing the origins of the semantic web (2013) 0.02
    Abstract
    The Semantic Web has been criticized for not being semantic. This article examines the questions of why and how the Web of Data, expressed in the Resource Description Framework (RDF), has come to be known as the Semantic Web. Contrary to previous papers, we deliberately take a descriptive stance and do not start from preconceived ideas about the nature of semantics. Instead, we mainly base our analysis on early design documents of the (Semantic) Web. The main determining factor is shown to be link typing, coupled with the influence of online metadata. Both factors already were present in early web standards and drafts. Our findings indicate that the Semantic Web is directly linked to older artificial intelligence work, despite occasional claims to the contrary. Because of link typing, the Semantic Web can be considered an example of a semantic network. Originally network representations of the meaning of natural language utterances, semantic networks have eventually come to refer to any networks with typed (usually directed) links. We discuss possible causes for this shift and suggest that it may be due to confounding paradigmatic and syntagmatic semantic relations.
  8. Dunsire, G.: FRBR and the Semantic Web (2012) 0.02
    Abstract
    Each of the FR family of models has been represented in Resource Description Framework (RDF), the basis of the Semantic Web. This has involved analysis of the entity-relationship diagrams and text of the models to identify and create the RDF classes, properties, definitions and scope notes required. The work has shown that it is possible to seamlessly connect the models within a semantic framework, specifically in the treatment of names, identifiers, and subjects, and link the RDF elements to those in related namespaces.
  9. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    Date
    22. 9.2007 15:41:14
  10. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.02
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
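The "sophisticated queries" the abstract mentions are typically SPARQL queries against the DBpedia datasets. A hedged sketch, building such a query as a plain string (actually sending it to the public endpoint is omitted to keep the example self-contained; the default property and limit are assumptions):

```python
# Sketch of a SPARQL query against DBpedia: find subjects whose given
# property points at a Wikipedia-derived resource. dbo:birthPlace is a
# real DBpedia ontology property; the defaults here are illustrative.

def dbpedia_query(resource, predicate="dbo:birthPlace", limit=10):
    """Build a SPARQL query for subjects whose `predicate` points at `resource`."""
    return """PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?s WHERE {
  ?s %s dbr:%s .
} LIMIT %d""" % (predicate, resource, limit)

print(dbpedia_query("Berlin"))
```

A query like this would be POSTed to a SPARQL endpoint such as http://dbpedia.org/sparql; the string construction alone shows the shape of the interface.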
  11. Harper, C.A.; Tillett, B.B.: Library of Congress controlled vocabularies and their application to the Semantic Web (2006) 0.02
    Abstract
    This article discusses how various controlled vocabularies, classification schemes and thesauri can serve as some of the building blocks of the Semantic Web. These vocabularies have been developed over the course of decades, and can be put to great use in the development of robust web services and Semantic Web technologies. The article covers how initial collaboration between the Semantic Web, Library and Metadata communities is creating partnerships to complete work in this area. It then discusses some core principles of authority control before talking more specifically about subject and genre vocabularies and name authority. It is hoped that future systems for internationally shared authority data will link the world's authority data from trusted sources to benefit users worldwide. Finally, the article looks at how encoding and markup of vocabularies can help ensure compatibility with the current and future state of Semantic Web development and provides examples of how this work can help improve the findability and navigation of information on the World Wide Web.
  12. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    Date
    22. 9.2007 15:41:14
  13. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    Date
    22. 9.2007 15:41:14
  14. Soergel, D.: SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology (1996) 0.02
    Abstract
    Presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
  15. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.02
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
  16. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  17. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.01
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. 
When the Internet debuted, there were no specific engines available for indexing or searching; the search domain only picked up as more and more content became available. One fundamental question I have always wondered about is this: given that search engines return very relevant results for a query, how can we ascertain that those documents are indeed the most relevant ones available? There is always an inherent delay in the indexing of documents. It is here that the new semantic document search engines can close that delay. Experimenting with search on the Semantic Web can only bode well for the future of search technology."
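The review describes Swoogle's core task: crawling RDF documents and indexing their terms so ontologies can be found by lookup. A minimal sketch of that idea (purely illustrative; the URIs and triples are hypothetical and this is not Swoogle's actual implementation) is an inverted index from RDF terms to the documents that mention them:

```python
# Sketch of indexing Semantic Web documents: each parsed RDF statement
# (subject, predicate, object) is recorded in an inverted index, so a term
# can be looked up across all crawled documents.
from collections import defaultdict

def index_triples(doc_uri, triples, index):
    """Record which document mentions each term of each triple."""
    for s, p, o in triples:
        for term in (s, p, o):
            index[term].add(doc_uri)

index = defaultdict(set)
# Two hypothetical RDF documents, already parsed into triples.
index_triples("http://example.org/doc1",
              [("ex:Swoogle", "rdf:type", "ex:SearchEngine")], index)
index_triples("http://example.org/doc2",
              [("ex:Swoogle", "ex:indexes", "ex:Ontology")], index)

print(sorted(index["ex:Swoogle"]))  # both documents mention ex:Swoogle
```

On top of such an index, a link-analysis step (the "Google-style page ranking" the review mentions) would then weight documents by how often other semantic documents reference their terms.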
  18. Semantic search over the Web (2012) 0.01
    0.012771706 = product of:
      0.06385853 = sum of:
        0.06385853 = weight(_text_:link in 411) [ClassicSimilarity], result of:
          0.06385853 = score(doc=411,freq=2.0), product of:
            0.2711644 = queryWeight, product of:
              5.3287 = idf(docFreq=582, maxDocs=44218)
              0.05088753 = queryNorm
            0.23549749 = fieldWeight in 411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3287 = idf(docFreq=582, maxDocs=44218)
              0.03125 = fieldNorm(doc=411)
      0.2 = coord(1/5)
    
    Content
    Inhalt: Introduction.- Part I Introduction to Web of Data.- Topology of the Web of Data.- Storing and Indexing Massive RDF Data Sets.- Designing Exploratory Search Applications upon Web Data Sources.- Part II Search over the Web.- Path-oriented Keyword Search query over RDF.- Interactive Query Construction for Keyword Search on the Semantic Web.- Understanding the Semantics of Keyword Queries on Relational Data Without Accessing the Instance.- Keyword-Based Search over Semantic Data.- Semantic Link Discovery over Relational Data.- Embracing Uncertainty in Entity Linking.- The Return of the Entity-Relationship Model: Ontological Query Answering.- Linked Data Services and Semantics-enabled Mashup.- Part III Linked Data Search engines.- A Recommender System for Linked Data.- Flint: from Web Pages to Probabilistic Semantic Data.- Searching and Browsing Linked Data with SWSE.
  19. Brambilla, M.; Ceri, S.: Designing exploratory search applications upon Web data sources (2012) 0.01
    0.012771706 = product of:
      0.06385853 = sum of:
        0.06385853 = weight(_text_:link in 428) [ClassicSimilarity], result of:
          0.06385853 = score(doc=428,freq=2.0), product of:
            0.2711644 = queryWeight, product of:
              5.3287 = idf(docFreq=582, maxDocs=44218)
              0.05088753 = queryNorm
            0.23549749 = fieldWeight in 428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3287 = idf(docFreq=582, maxDocs=44218)
              0.03125 = fieldNorm(doc=428)
      0.2 = coord(1/5)
    
    Abstract
    Search is the preferred method to access information in today's computing systems. The Web, accessed through search engines, is universally recognized as the source for answering users' information needs. However, offering a link to a Web page does not cover all information needs. Even simple problems, such as "Which theater offers an action movie with at least three stars in London close to a good Italian restaurant," can only be solved by searching the Web multiple times, e.g., by extracting a list of the recent action movies filtered by ranking, then looking for movie theaters, then looking for Italian restaurants close to them. While search engines hint at useful information, the user's brain is the fundamental platform for information integration. An important trend is the availability of new, specialized data sources-the so-called "long tail" of the Web of data. Such carefully collected and curated data sources can be much more valuable than information currently available in Web pages; however, many sources remain hidden or insulated, for lack of software solutions for bringing them to the surface and making them usable in the search context. A new class of tailor-made systems, designed to satisfy the needs of users with specific aims, will support the publishing and integration of data sources for vertical domains; the user will be able to select sources based on individual or collective trust, and systems will be able to route queries to such sources and to provide easy-to-use interfaces for combining them within search strategies, at the same time rewarding the data source owners for each contribution to effective search. Efforts such as Google's Fusion Tables show that the technology for bringing hidden data sources to the surface is feasible.
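The abstract's movie-theater-restaurant example is a pipeline of filters and joins over several data sources. A small sketch of that composition (the data sources, field names, and thresholds are all hypothetical, standing in for real curated "long tail" sources) makes the integration step explicit:

```python
# Illustrative exploratory-search pipeline: filter one source, then join the
# result with two others -- the integration a user otherwise performs
# mentally across several separate Web searches.
movies = [{"title": "Skyfall", "genre": "action", "stars": 4},
          {"title": "Grown Ups", "genre": "comedy", "stars": 2}]
theaters = [{"name": "Odeon", "city": "London", "shows": "Skyfall"}]
restaurants = [{"name": "Da Mario", "cuisine": "italian",
                "near": "Odeon", "rating": 4}]

def exploratory_search():
    # Step 1: action movies with at least three stars.
    good_movies = {m["title"] for m in movies
                   if m["genre"] == "action" and m["stars"] >= 3}
    # Steps 2-3: join with theaters showing them, then with good Italian
    # restaurants near those theaters.
    results = []
    for t in theaters:
        if t["shows"] in good_movies:
            for r in restaurants:
                if (r["near"] == t["name"]
                        and r["cuisine"] == "italian" and r["rating"] >= 3):
                    results.append((t["shows"], t["name"], r["name"]))
    return results
```

A system of the kind the abstract envisions would route each step to a different registered data source rather than to in-memory lists.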
  20. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.01
    0.011175244 = product of:
      0.055876218 = sum of:
        0.055876218 = weight(_text_:link in 1210) [ClassicSimilarity], result of:
          0.055876218 = score(doc=1210,freq=2.0), product of:
            0.2711644 = queryWeight, product of:
              5.3287 = idf(docFreq=582, maxDocs=44218)
              0.05088753 = queryNorm
            0.2060603 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3287 = idf(docFreq=582, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1210)
      0.2 = coord(1/5)
    
    Abstract
    * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179 (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.); * The xml.org directory of Extensible Markup Language (XML) document specifications facilitating re-use of Document Type Definitions (DTDs), hosted by the Organization for the Advancement of Structured Information Standards (OASIS); * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen; * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents; * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world; * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives. Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. 
Merging schemas into one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
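The harvesting model the abstract describes (a registry pulls element sets from several schema maintainers and serves one merged view per term) can be sketched in a few lines. This is a hedged illustration only, not the actual registry software; the class, schema names, and URIs are hypothetical:

```python
# Sketch of a metadata registry: terms harvested from multiple schemas are
# merged into one index, so a single lookup returns every schema that
# declares a given term -- encouraging reuse over reinvention.
from collections import defaultdict

class MetadataRegistry:
    def __init__(self):
        # term -> list of (schema name, definition URI)
        self._terms = defaultdict(list)

    def harvest(self, schema_name, terms):
        """Harvest a schema's element set from its maintainer."""
        for term, uri in terms.items():
            self._terms[term].append((schema_name, uri))

    def lookup(self, term):
        """One merged view over all schemas declaring this term."""
        return self._terms.get(term, [])

registry = MetadataRegistry()
registry.harvest("Dublin Core",
                 {"creator": "http://purl.org/dc/elements/1.1/creator"})
registry.harvest("MyAppProfile",  # hypothetical local application profile
                 {"creator": "http://example.org/schema#creator"})
```

A lookup of "creator" now surfaces both the standard element and the local reuse of it, which is exactly the added value the abstract attributes to registries.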

Languages

  • e 40
  • d 6

Types

  • a 28
  • el 14
  • m 7
  • s 3
  • n 1
  • x 1