Search (48 results, page 1 of 3)

  • theme_ss:"Semantic Web"
  1. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.04
    0.035841778 = product of:
      0.107525334 = sum of:
        0.107525334 = sum of:
          0.063689396 = weight(_text_:project in 1026) [ClassicSimilarity], result of:
            0.063689396 = score(doc=1026,freq=2.0), product of:
              0.19509704 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.04622078 = queryNorm
              0.32644984 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
          0.04383594 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
            0.04383594 = score(doc=1026,freq=2.0), product of:
              0.16185729 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04622078 = queryNorm
              0.2708308 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
      0.33333334 = coord(1/3)
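    The indented trees under each hit are verbatim Lucene ClassicSimilarity explanations. For readers who want to check the arithmetic, here is a minimal Python sketch (mine, not part of the search page) that reproduces the score of hit 1 from the constants shown in the tree:

        # Reproduces the ClassicSimilarity arithmetic of hit 1 above.
        # All constants are copied from the explanation tree.
        import math

        MAX_DOCS = 44218
        QUERY_NORM = 0.04622078
        FIELD_NORM = 0.0546875            # fieldNorm(doc=1026)
        TF = math.sqrt(2.0)               # tf(freq=2.0) = sqrt(freq) = 1.4142135

        def idf(doc_freq):                # idf = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

        def term_weight(doc_freq):
            query_weight = idf(doc_freq) * QUERY_NORM       # 0.19509704 for "project"
            field_weight = TF * idf(doc_freq) * FIELD_NORM  # 0.32644984 for "project"
            return query_weight * field_weight              # 0.063689396 for "project"

        score = (term_weight(1764) + term_weight(3622)) / 3.0   # coord(1/3)
        print(score)   # ~0.035841778, displayed as 0.04 next to hit 1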
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  2. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.03
    0.025343968 = product of:
      0.0760319 = sum of:
        0.0760319 = sum of:
          0.045035206 = weight(_text_:project in 1155) [ClassicSimilarity], result of:
            0.045035206 = score(doc=1155,freq=4.0), product of:
              0.19509704 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.04622078 = queryNorm
              0.2308349 = fieldWeight in 1155, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1155)
          0.030996693 = weight(_text_:22 in 1155) [ClassicSimilarity], result of:
            0.030996693 = score(doc=1155,freq=4.0), product of:
              0.16185729 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04622078 = queryNorm
              0.19150631 = fieldWeight in 1155, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1155)
      0.33333334 = coord(1/3)
    
    Abstract
    The MTSR 2013 program and the contents of these proceedings show a rich diversity of research and practices, drawing on problems from metadata and semantically focused tools and technologies, linked data, cross-language semantics, ontologies, metadata models, and semantic system and metadata standards. The general session of the conference included 18 papers covering a broad spectrum of topics, demonstrating the interdisciplinary nature of the metadata field, and was divided into three main themes: platforms for research data sets, system architecture and data management; metadata and ontology validation, evaluation, mapping and interoperability; and content management. Metadata as a research topic is maturing, and the conference also supported the following five tracks: Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures; Metadata and Semantics for Cultural Collections and Applications; Metadata and Semantics for Agriculture, Food and Environment; Big Data and Digital Libraries in Health, Science and Technology; and European and National Projects, and Project Networking. Each track had a rich selection of papers, giving broader diversity to MTSR, and enabling deeper exploration of significant topics.
    All the papers underwent a thorough and rigorous peer-review process. The review and selection this year were highly competitive and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were accepted, totaling 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the Arts Department of KU Leuven (Belgium) and director of the university library, addressed semantic research drawing from his work with Europeana. The title of his presentation was, "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor from our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new potential at the intersection of search and linked data. The title of his talk was, "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
    Date
    17.12.2013 12:51:22
  3. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.02
    0.020481016 = product of:
      0.06144305 = sum of:
        0.06144305 = sum of:
          0.03639394 = weight(_text_:project in 2661) [ClassicSimilarity], result of:
            0.03639394 = score(doc=2661,freq=2.0), product of:
              0.19509704 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.04622078 = queryNorm
              0.18654276 = fieldWeight in 2661, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.03125 = fieldNorm(doc=2661)
          0.025049109 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
            0.025049109 = score(doc=2661,freq=2.0), product of:
              0.16185729 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04622078 = queryNorm
              0.15476047 = fieldWeight in 2661, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2661)
      0.33333334 = coord(1/3)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits: the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and about the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data are drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.016313523 = product of:
      0.048940565 = sum of:
        0.048940565 = product of:
          0.1468217 = sum of:
            0.1468217 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1468217 = score(doc=701,freq=2.0), product of:
                0.39186028 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04622078 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf. http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    0.014611981 = product of:
      0.04383594 = sum of:
        0.04383594 = product of:
          0.08767188 = sum of:
            0.08767188 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.08767188 = score(doc=4643,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  6. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.0125245545 = product of:
      0.03757366 = sum of:
        0.03757366 = product of:
          0.07514732 = sum of:
            0.07514732 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07514732 = score(doc=6048,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  7. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    0.0125245545 = product of:
      0.03757366 = sum of:
        0.03757366 = product of:
          0.07514732 = sum of:
            0.07514732 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.07514732 = score(doc=100,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  8. Luo, Y.; Picalausa, F.; Fletcher, G.H.L.; Hidders, J.; Vansummeren, S.: Storing and indexing massive RDF datasets (2012) 0.01
    0.010722669 = product of:
      0.032168005 = sum of:
        0.032168005 = product of:
          0.06433601 = sum of:
            0.06433601 = weight(_text_:project in 414) [ClassicSimilarity], result of:
              0.06433601 = score(doc=414,freq=4.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.32976416 = fieldWeight in 414, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=414)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The resource description framework (RDF for short) provides a flexible method for modeling information on the Web [34,40]. All data items in RDF are uniformly represented as triples of the form (subject, predicate, object), sometimes also referred to as (subject, property, value) triples. As a running example for this chapter, a small fragment of an RDF dataset concerning music and music fans is given in Fig. 2.1. Spurred by efforts like the Linking Open Data project, increasingly large volumes of data are being published in RDF. Notable contributors in this respect include areas as diverse as the government, the life sciences, Web 2.0 communities, and so on. To give an idea of the volumes of RDF data concerned, as of September 2012, there were 31,634,213,770 triples in total published by data sources participating in the Linking Open Data project. Many individual data sources (like, e.g., PubMed, DBpedia, MusicBrainz) contain hundreds of millions of triples (797, 672, and 179 millions, respectively). These large volumes of RDF data motivate the need for scalable native RDF data management solutions capable of efficiently storing, indexing, and querying RDF data. In this chapter, we present a general and up-to-date survey of the current state of the art in RDF storage and indexing.
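    As a hedged illustration of the triple model just described, here is a minimal Python sketch using rdflib; the resources echo the chapter's music/music-fan running example but are invented here, not taken from its Fig. 2.1:

        # Toy RDF graph of music fans; every data item is a
        # (subject, predicate, object) triple, as the abstract describes.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/music/")
        g = Graph()
        g.add((EX.alice, RDF.type, EX.MusicFan))
        g.add((EX.alice, EX.likes, EX.BobDylan))             # (subject, predicate, object)
        g.add((EX.BobDylan, EX.name, Literal("Bob Dylan")))  # (subject, property, value)
        print(g.serialize(format="nt"))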
  9. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.01
    0.010722669 = product of:
      0.032168005 = sum of:
        0.032168005 = product of:
          0.06433601 = sum of:
            0.06433601 = weight(_text_:project in 1210) [ClassicSimilarity], result of:
              0.06433601 = score(doc=1210,freq=4.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.32976416 = fieldWeight in 1210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. There are massive documents being created with well-defined structures. Though these documents are about domain specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain specific information and annotate these data to documents with formal markup languages. We propose this project to develop a domain specific search engine for processing different documents and building explicit relations for them. This research project consists of three main focuses: examining different domain specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; exploring novel ways to build relations for documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
  10. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.01043713 = product of:
      0.03131139 = sum of:
        0.03131139 = product of:
          0.06262278 = sum of:
            0.06262278 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.06262278 = score(doc=2090,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  11. Auer, S.; Lehmann, J.: What have Innsbruck and Leipzig in common? : extracting semantics from Wiki content (2007) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 2481) [ClassicSimilarity], result of:
              0.054590914 = score(doc=2481,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 2481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2481)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia on the basis of a wiki alone. Recently, several approaches have been proposed for extending wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are in fact used, albeit unconsciously, by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query answering possibilities (such as for the title question). We analyze the quality of the extracted content, and propose strategies for quality improvements with just minor modifications of the wiki systems being currently used.
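    The core of the extraction idea (reading a template instance's attribute-value pairs off as statements) fits in a few lines. The following toy Python sketch is mine, with an invented infobox, and is not the authors' extraction code:

        # Toy sketch: a wiki infobox's key=value pairs become
        # (subject, predicate, object) statements.
        import re

        wikitext = "{{Infobox town|name=Innsbruck|country=Austria|elevation=574}}"

        def extract_statements(subject, text):
            body = re.search(r"\{\{Infobox [^|]*\|(.*)\}\}", text).group(1)
            return [(subject, key, value)
                    for key, value in (pair.split("=", 1) for pair in body.split("|"))]

        print(extract_statements("Innsbruck", wikitext))
        # [('Innsbruck', 'name', 'Innsbruck'), ('Innsbruck', 'country', 'Austria'), ...]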
  12. Vatant, B.: Porting library vocabularies to the Semantic Web, and back : a win-win round trip (2010) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 3968) [ClassicSimilarity], result of:
              0.054590914 = score(doc=3968,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 3968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3968)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The role of vocabularies is critical in the long overdue synergy between the Web and library heritage. The Semantic Web should leverage existing vocabularies instead of reinventing them, but the specific features of library vocabularies make them more or less portable to the Semantic Web. Based on preliminary results in the framework of the TELplus project, we suggest guidelines for the evolutions needed to make vocabularies usable and efficient in the Semantic Web realm, assess choices made so far by large libraries to publish vocabularies conformant to standards and good practices, and review how Semantic Web tools can help manage those vocabularies.
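    What such a port minimally looks like on the Semantic Web side can be sketched with rdflib and SKOS; the concept below is invented for illustration and is not TELplus data:

        # A thesaurus entry published as a SKOS concept with a label
        # and hierarchy; the vocabulary content is invented.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        VOC = Namespace("http://example.org/vocab/")
        g = Graph()
        concept = VOC["semanticWeb"]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal("Semantic Web", lang="en")))
        g.add((concept, SKOS.broader, VOC["worldWideWeb"]))
        print(g.serialize(format="turtle"))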
  13. Binding, C.; Tudhope, D.: Terminology Web services (2010) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 4067) [ClassicSimilarity], result of:
              0.054590914 = score(doc=4067,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 4067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4067)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Controlled terminologies such as classification schemes, name authorities, and thesauri have long been the domain of the library and information science community. Although historically there have been initiatives towards library style classification of web resources, there remain significant problems with searching and quality judgement of online content. Terminology services can play a key role in opening up access to these valuable resources. By exposing controlled terminologies via a web service, organisations maintain data integrity and version control, whilst motivating external users to design innovative ways to present and utilise their data. We introduce terminology web services and review work in the area. We describe the approaches taken in establishing application programming interfaces (API) and discuss the comparative benefits of a dedicated terminology web service versus general purpose programming languages. We discuss experiences at Glamorgan in creating terminology web services and associated client interface components, in particular for the archaeology domain in the STAR (Semantic Technologies for Archaeological Resources) Project.
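    To make the idea of a terminology web service concrete, here is a hypothetical Python client; the endpoint URL and parameter names are invented for illustration and are not the STAR project's actual API:

        # Hypothetical client for a concept-lookup terminology service.
        import json
        from urllib.parse import urlencode
        from urllib.request import urlopen

        def lookup_concepts(base_url, label):
            query = urlencode({"label": label, "format": "json"})
            with urlopen(f"{base_url}/concepts?{query}") as response:
                return json.load(response)

        # e.g. lookup_concepts("http://terminology.example.org", "hillfort")
        # might return matching concepts with URIs, preferred labels and broader terms.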
  14. Metadata and semantics research : 5th International Conference, MTSR 2011, Izmir, Turkey, October 12-14, 2011. Proceedings (2011) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 1152) [ClassicSimilarity], result of:
              0.054590914 = score(doc=1152,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 1152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1152)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This volume constitutes the selected papers of the 5th International Conference on Metadata and Semantics Research, MTSR 2011, held in Izmir, Turkey, in October 2011. The 36 full papers presented together with 16 short papers and project reports were carefully reviewed and selected from 118 submissions. The papers are organized in topical sections on Tracks on Metadata and Semantics for Open Access Repositories and Infrastructures, Metadata and Semantics for Learning Infrastructures, Metadata and Semantics for Cultural Collections and Applications, Metadata and Semantics for Agriculture, Food and Environment.
  15. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.01
    0.008578135 = product of:
      0.025734404 = sum of:
        0.025734404 = product of:
          0.05146881 = sum of:
            0.05146881 = weight(_text_:project in 4709) [ClassicSimilarity], result of:
              0.05146881 = score(doc=4709,freq=4.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.26381132 = fieldWeight in 4709, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  16. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.008349704 = product of:
      0.025049109 = sum of:
        0.025049109 = product of:
          0.050098218 = sum of:
            0.050098218 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.050098218 = score(doc=3376,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.2010 16:58:22
  17. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.01
    0.008349704 = product of:
      0.025049109 = sum of:
        0.025049109 = product of:
          0.050098218 = sum of:
            0.050098218 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.050098218 = score(doc=4331,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    15. 3.2011 19:21:22
  18. OWL Web Ontology Language Test Cases (2004) 0.01
    0.008349704 = product of:
      0.025049109 = sum of:
        0.025049109 = product of:
          0.050098218 = sum of:
            0.050098218 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.050098218 = score(doc=4685,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 8.2011 13:33:22
  19. Bizer, C.; Lehmann, J.; Kobilarov, G.; Auer, S.; Becker, C.; Cyganiak, R.; Hellmann, S.: DBpedia: a crystallization point for the Web of Data. (2009) 0.01
    0.0075820712 = product of:
      0.022746213 = sum of:
        0.022746213 = product of:
          0.045492426 = sum of:
            0.045492426 = weight(_text_:project in 1643) [ClassicSimilarity], result of:
              0.045492426 = score(doc=1643,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.23317845 = fieldWeight in 1643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
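    Dereferencing a DBpedia identifier into RDF, as the abstract describes, can be sketched in a few lines of Python with rdflib; the choice of resource is mine, and the sketch assumes the endpoint serves RDF via content negotiation:

        # Dereference a DBpedia resource URI into a local RDF graph.
        from rdflib import Graph

        g = Graph()
        g.parse("http://dbpedia.org/resource/Berlin")  # rdflib negotiates an RDF format
        print(len(g), "triples describe this entity")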
  20. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.01
    0.0075820712 = product of:
      0.022746213 = sum of:
        0.022746213 = product of:
          0.045492426 = sum of:
            0.045492426 = weight(_text_:project in 1909) [ClassicSimilarity], result of:
              0.045492426 = score(doc=1909,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.23317845 = fieldWeight in 1909, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1909)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal show some weaknesses of this approach, which show up in most metadata-driven Digital Libraries (DLs) or subject-specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that improve quality in DLs when combined with the heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods, which are derived from scientometrics and network analysis, will be implemented with the objective of re-ranking result sets by the following structural properties: the ranking of the results by core journals (so-called Bradfordizing) and ranking by centrality of authors in co-authorship networks. Findings - The methods to be implemented focus on the query and on the result side of a search and are designed to positively influence each other. Conceptually, they will improve the search quality and guarantee that the most relevant documents in result sets are ranked higher. Originality/value - The paper's central contribution is the integration of three structural value-adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages in the information retrieval process: query construction, search and ranking, and re-ranking.
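    One of the two re-ranking methods named above, Bradfordizing, reduces to a simple rule: documents from the journals that occur most often in the result set move to the top. A hedged Python sketch with invented data:

        # Bradfordizing sketch: rank documents by how often their
        # journal appears in the result set (core journals first).
        from collections import Counter

        results = [
            {"title": "Doc A", "journal": "JASIST"},
            {"title": "Doc B", "journal": "JDoc"},
            {"title": "Doc C", "journal": "JASIST"},
            {"title": "Doc D", "journal": "IP&M"},
            {"title": "Doc E", "journal": "JASIST"},
        ]

        journal_freq = Counter(doc["journal"] for doc in results)
        bradfordized = sorted(results, key=lambda d: journal_freq[d["journal"]], reverse=True)
        print([doc["title"] for doc in bradfordized])   # JASIST documents first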

Languages

  • e 41
  • d 6

Types

  • a 25
  • el 17
  • m 8
  • s 4
  • x 2
  • n 1