Search (62 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.05
    0.049288586 = product of:
      0.14786576 = sum of:
        0.03696644 = product of:
          0.110899314 = sum of:
            0.110899314 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.110899314 = score(doc=701,freq=2.0), product of:
                0.2959851 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03491209 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.110899314 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.110899314 = score(doc=701,freq=2.0), product of:
            0.2959851 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03491209 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.33333334 = coord(2/6)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
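    Each result above is followed by a Lucene "explain" tree for its ClassicSimilarity (tf-idf) score; the `_text_:3a` and `_text_:2f` clauses in entry 1 are stray tokens from the URL-encoded link in its Content field. Reading a tree bottom-up, the documented ClassicSimilarity factors combine as

    ```latex
    \mathrm{score}(q,d) = \mathrm{coord}(q,d) \sum_{t \in q}
      \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\mathrm{queryWeight}} \cdot
      \underbrace{\sqrt{\mathrm{tf}(t,d)}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}},
    \qquad
    \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}+1}
    ```

    A minimal check in Python, using only the numbers printed in entry 1's tree, reproduces its score (all values from the tree above; nothing is invented except the variable names):

    ```python
    import math

    # Reproduce the "_text_:3a" clause of entry 1 (doc 701, freq=2.0,
    # docFreq=24, maxDocs=44218) from its explain tree.
    max_docs, doc_freq = 44218, 24
    query_norm, field_norm, freq = 0.03491209, 0.03125, 2.0

    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011
    tf = math.sqrt(freq)                               # 1.4142135
    query_weight = idf * query_norm                    # 0.2959851
    field_weight = tf * idf * field_norm               # 0.3746787
    clause = query_weight * field_weight               # 0.110899314

    # The "3a" clause is scaled by coord(1/3); both clauses are then
    # summed and scaled by coord(2/6) to give the final document score.
    total = (clause * (1 / 3) + clause) * (2 / 6)
    print(round(total, 9))  # -> 0.049288584, i.e. 0.049288586 up to float rounding
    ```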
  2. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.03
    0.027652267 = product of:
      0.0829568 = sum of:
        0.05466523 = weight(_text_:searching in 99) [ClassicSimilarity], result of:
          0.05466523 = score(doc=99,freq=6.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.38706642 = fieldWeight in 99, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=99)
        0.028291566 = product of:
          0.056583133 = sum of:
            0.056583133 = weight(_text_:etc in 99) [ClassicSimilarity], result of:
              0.056583133 = score(doc=99,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.2992217 = fieldWeight in 99, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built up from two engines: the first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation, related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the semantically annotated texts produced previously in order to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are expressed as supplementary information.
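    The "semantic inverted index" idea in the abstract above - retrieving documents by discourse category rather than by raw term alone - can be sketched in a few lines. This illustrates only the data structure; EXCOM's actual annotation pipeline and category inventory are far richer, and all data below is invented:

    ```python
    from collections import defaultdict

    # Hypothetical annotated spans: (doc_id, semantic_category, text).
    annotations = [
        (1, "definition", "an ontology is a formal specification of a conceptualization"),
        (2, "causality", "link rot causes broken citations"),
        (1, "quotation", "a web of data rather than a web of documents"),
    ]

    index = defaultdict(set)  # (category, term) -> set of doc ids
    for doc_id, category, text in annotations:
        for term in text.split():
            index[(category, term)].add(doc_id)

    def search(category, term):
        """Documents where `term` occurs inside a span annotated `category`."""
        return sorted(index[(category, term)])

    print(search("definition", "ontology"))  # -> [1]
    ```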
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    0.02760309 = product of:
      0.08280927 = sum of:
        0.022316987 = weight(_text_:searching in 150) [ClassicSimilarity], result of:
          0.022316987 = score(doc=150,freq=4.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.1580192 = fieldWeight in 150, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.060492285 = sum of:
          0.040010322 = weight(_text_:etc in 150) [ClassicSimilarity], result of:
            0.040010322 = score(doc=150,freq=4.0), product of:
              0.18910104 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.03491209 = queryNorm
              0.2115817 = fieldWeight in 150, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.020481963 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.020481963 = score(doc=150,freq=6.0), product of:
              0.1222562 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03491209 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.33333334 = coord(2/6)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, pp. 457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., objects, events, tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
  4. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.02
    0.019750334 = product of:
      0.059251003 = sum of:
        0.031243779 = weight(_text_:searching in 517) [ClassicSimilarity], result of:
          0.031243779 = score(doc=517,freq=4.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22122687 = fieldWeight in 517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.02734375 = fieldNorm(doc=517)
        0.028007222 = product of:
          0.056014445 = sum of:
            0.056014445 = weight(_text_:etc in 517) [ClassicSimilarity], result of:
              0.056014445 = score(doc=517,freq=4.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.29621437 = fieldWeight in 517, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=517)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
  5. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.0173545 = product of:
      0.0520635 = sum of:
        0.03787318 = weight(_text_:searching in 4649) [ClassicSimilarity], result of:
          0.03787318 = score(doc=4649,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.26816747 = fieldWeight in 4649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.014190319 = product of:
          0.028380638 = sum of:
            0.028380638 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.028380638 = score(doc=4649,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
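    The abstract above compares Web-based semantic distance measures. For reference, the two it names are commonly defined as follows (standard formulations, not necessarily the exact variants used in the study):

    ```latex
    \mathrm{NGD}(x,y) = \frac{\max\{\log f(x), \log f(y)\} - \log f(x,y)}
                             {\log N - \min\{\log f(x), \log f(y)\}},
    \qquad
    \mathrm{PMI}(x,y) = \log \frac{p(x,y)}{p(x)\,p(y)}
    ```

    where f(x) is the number of pages containing term x, f(x,y) the number containing both, N the total number of pages indexed, and p(.) the corresponding co-occurrence probabilities.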
  6. Subirats, I.; Prasad, A.R.D.; Keizer, J.; Bagdanov, A.: Implementation of rich metadata formats and semantic tools using DSpace (2008) 0.01
    0.011569667 = product of:
      0.034709 = sum of:
        0.025248786 = weight(_text_:searching in 2656) [ClassicSimilarity], result of:
          0.025248786 = score(doc=2656,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.1787783 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03125 = fieldNorm(doc=2656)
        0.009460213 = product of:
          0.018920425 = sum of:
            0.018920425 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
              0.018920425 = score(doc=2656,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.15476047 = fieldWeight in 2656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2656)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This poster explores the customization of DSpace to allow the use of the AGRIS Application Profile metadata standard and the AGROVOC thesaurus. The objective is the adaptation of DSpace, through the least invasive code changes either in the form of plug-ins or add-ons, to the specific needs of the Agricultural Sciences and Technology community. Metadata standards such as AGRIS AP, and Knowledge Organization Systems such as the AGROVOC thesaurus, provide mechanisms for sharing information in a standardized manner by recommending the use of common semantics and interoperable syntax (Subirats et al., 2007). AGRIS AP was created to enhance the description, exchange and subsequent retrieval of agricultural Document-like Information Objects (DLIOs). It is a metadata schema which draws from metadata standards such as Dublin Core (DC), the Australian Government Locator Service Metadata (AGLS) and the Agricultural Metadata Element Set (AgMES) namespaces. It allows sharing of information across dispersed bibliographic systems (FAO, 2005). AGROVOC is a multilingual structured thesaurus covering agricultural and related domains. Its main role is to standardize the indexing process in order to make searching simpler and more efficient. AGROVOC is developed by FAO (Lauser et al., 2006). The customization of DSpace is taking place in several phases. First, the AGRIS AP metadata schema was mapped onto the metadata DSpace model, with several enhancements implemented to support AGRIS AP elements. Next, AGROVOC will be integrated as a controlled vocabulary accessed through a local SKOS or OWL file. Eventually the system will be configurable to access AGROVOC through local files or remotely via webservices. Finally, spell checking and tooltips will be incorporated in the user interface to support metadata editing. Adapting DSpace to support AGRIS AP and annotation using the semantically-rich AGROVOC thesaurus transforms DSpace into a powerful, domain-specific system for annotation and exchange of bibliographic metadata in the agricultural domain.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  7. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.01
    0.010123459 = product of:
      0.030370375 = sum of:
        0.022092689 = weight(_text_:searching in 2665) [ClassicSimilarity], result of:
          0.022092689 = score(doc=2665,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.15643102 = fieldWeight in 2665, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.008277686 = product of:
          0.016555373 = sum of:
            0.016555373 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
              0.016555373 = score(doc=2665,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.1354154 = fieldWeight in 2665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2665)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations and Geonames (http://geonames.org/) URIs for places are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists.
Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
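    Shaw and Buckland's "four Ws" in entry 7 map naturally onto existing metadata properties. A minimal sketch with rdflib, in which every resource path is a placeholder (only the Dublin Core terms and the identifier hosts named in the abstract are real):

    ```python
    from rdflib import Graph, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    text = URIRef("http://example.org/texts/42")  # placeholder document URI

    # who / what / where / when, each linked to a naming authority's URI
    # rather than a literal string (identifier paths are placeholders).
    g.add((text, DCTERMS.creator, URIRef("http://worldcat.org/identities/placeholder")))
    g.add((text, DCTERMS.subject, URIRef("http://id.loc.gov/authorities/subjects/placeholder")))
    g.add((text, DCTERMS.spatial, URIRef("http://geonames.org/placeholder")))
    g.add((text, DCTERMS.temporal, URIRef("http://example.org/periods/placeholder")))

    print(g.serialize(format="turtle"))
    ```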
  8. McGuinness, D.L.: Ontologies come of age (2003) 0.01
    0.008167073 = product of:
      0.049002435 = sum of:
        0.049002435 = product of:
          0.09800487 = sum of:
            0.09800487 = weight(_text_:etc in 3084) [ClassicSimilarity], result of:
              0.09800487 = score(doc=3084,freq=6.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.5182672 = fieldWeight in 3084, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3084)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.
  9. Severiens, T.; Thiemann, C.: RDF database for PhysNet and similar portals (2006) 0.01
    0.0075444183 = product of:
      0.04526651 = sum of:
        0.04526651 = product of:
          0.09053302 = sum of:
            0.09053302 = weight(_text_:etc in 245) [ClassicSimilarity], result of:
              0.09053302 = score(doc=245,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.47875473 = fieldWeight in 245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0625 = fieldNorm(doc=245)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    PhysNet (www.physnet.net) is a portal for physics that has been run since 1995 and is continuously being developed; today it uses an OWL Lite ontology and a MySQL database for storing triples with facts such as department information, postal addresses, GPS coordinates, URLs of publication repositories, etc. The article focuses on the structure and the development of the underlying ontology; it also gives a detailed overview of an online web-based editorial tool used to maintain the facts database.
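    The abstract describes facts held as subject/predicate/object triples in a relational database. A rough sketch of that storage shape, with SQLite standing in for MySQL and an invented row (the article describes the real schema, which this does not reproduce):

    ```python
    import sqlite3

    # Bare-bones triple table: one row per (subject, predicate, object) fact.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
    con.execute("INSERT INTO triples VALUES (?, ?, ?)",
                ("dept:example-physics", "vocab:homepage", "http://www.physnet.net/"))

    # Lookups are plain SELECTs over the fixed three-column shape.
    for (obj,) in con.execute("SELECT o FROM triples WHERE s=? AND p=?",
                              ("dept:example-physics", "vocab:homepage")):
        print(obj)
    ```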
  10. Breslin, J.G.: Social semantic information spaces (2009) 0.01
    0.006668387 = product of:
      0.040010322 = sum of:
        0.040010322 = product of:
          0.080020644 = sum of:
            0.080020644 = weight(_text_:etc in 3377) [ClassicSimilarity], result of:
              0.080020644 = score(doc=3377,freq=4.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.4231634 = fieldWeight in 3377, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3377)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    The structural and syntactic web put in place in the early 90s is still much the same as what we use today: resources (web pages, files, etc.) connected by untyped hyperlinks. By untyped, we mean that there is no easy way for a computer to figure out what a link between two pages means - for example, on the W3C website, there are hundreds of links to the various organisations that are registered members of the association, but there is nothing explicitly saying that the link is to an organisation that is a "member of" the W3C or what type of organisation is represented by the link. On John's work page, he links to many papers he has written, but it does not explicitly say that he is the author of those papers or that he wrote such-and-such when he was working at a particular university. In fact, the Web was envisaged to be much more, as one can see from the image in Fig. 1 which is taken from Tim Berners-Lee's original outline for the Web in 1989, entitled "Information Management: A Proposal". In this, all the resources are connected by links describing the type of relationships, e.g. "wrote", "describe", "refers to", etc. This is a precursor to the Semantic Web which we will come back to later.
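    Breslin's example - a link that says John *wrote* a paper, not merely that two pages are connected - is exactly what a typed RDF triple adds over an HTML hyperlink. A minimal rdflib sketch with made-up resource URIs (dcterms:creator is a real vocabulary term; everything else is invented):

    ```python
    from rdflib import Graph, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    paper = URIRef("http://example.org/papers/sioc-2005")  # hypothetical resources
    john = URIRef("http://example.org/people/john")

    # An HTML hyperlink only says "these pages are connected"; this triple
    # also says *how*: John is the author of the paper.
    g.add((paper, DCTERMS.creator, john))

    for s, p, o in g:
        print(s, p, o)
    ```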
  11. Wang, H.; Liu, Q.; Penin, T.; Fu, L.; Zhang, L.; Tran, T.; Yu, Y.; Pan, Y.: Semplore: a scalable IR approach to search the Web of Data (2009) 0.01
    0.0063121966 = product of:
      0.03787318 = sum of:
        0.03787318 = weight(_text_:searching in 1638) [ClassicSimilarity], result of:
          0.03787318 = score(doc=1638,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.26816747 = fieldWeight in 1638, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.046875 = fieldNorm(doc=1638)
      0.16666667 = coord(1/6)
    
    Abstract
    The Web of Data keeps growing rapidly. However, the full exploitation of this large amount of structured data faces numerous challenges like usability, scalability, imprecise information needs and data change. We present Semplore, an IR-based system that aims at addressing these issues. Semplore supports intuitive faceted search and complex queries both on text and structured data. It combines imprecise keyword search and precise structured query in a unified ranking scheme. Scalable query processing is supported by leveraging inverted indexes traditionally used in IR systems. This is combined with a novel block-based index structure to support efficient index update when data changes. The experimental results show that Semplore is an efficient and effective system for searching the Web of Data and can be used as a basic infrastructure for Web-scale Semantic Web search engines.
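    Semplore's central trick, per the abstract above, is answering structured constraints with the same posting lists an IR engine uses for keywords. This toy Python caricature invents the data and omits Semplore's block-based index, update handling, and ranking entirely:

    ```python
    # Hybrid query = keyword posting list INTERSECT "type" posting list.
    docs = {
        1: {"type": "Conference", "text": "semantic web search engines"},
        2: {"type": "Journal",    "text": "scalable search over data"},
        3: {"type": "Conference", "text": "inverted index structures"},
    }

    postings = {}       # term -> set of doc ids (keyword index)
    type_postings = {}  # type -> set of doc ids (structure index)
    for doc_id, doc in docs.items():
        type_postings.setdefault(doc["type"], set()).add(doc_id)
        for term in doc["text"].split():
            postings.setdefault(term, set()).add(doc_id)

    # "documents of type Conference containing the keyword 'search'"
    hits = postings.get("search", set()) & type_postings.get("Conference", set())
    print(sorted(hits))  # -> [1]
    ```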
  12. Binding, C.; Tudhope, D.: Terminology Web services (2010) 0.01
    0.0063121966 = product of:
      0.03787318 = sum of:
        0.03787318 = weight(_text_:searching in 4067) [ClassicSimilarity], result of:
          0.03787318 = score(doc=4067,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.26816747 = fieldWeight in 4067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.046875 = fieldNorm(doc=4067)
      0.16666667 = coord(1/6)
    
    Abstract
    Controlled terminologies such as classification schemes, name authorities, and thesauri have long been the domain of the library and information science community. Although historically there have been initiatives towards library style classification of web resources, there remain significant problems with searching and quality judgement of online content. Terminology services can play a key role in opening up access to these valuable resources. By exposing controlled terminologies via a web service, organisations maintain data integrity and version control, whilst motivating external users to design innovative ways to present and utilise their data. We introduce terminology web services and review work in the area. We describe the approaches taken in establishing application programming interfaces (API) and discuss the comparative benefits of a dedicated terminology web service versus general purpose programming languages. We discuss experiences at Glamorgan in creating terminology web services and associated client interface components, in particular for the archaeology domain in the STAR (Semantic Technologies for Archaeological Resources) Project.
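    A terminology web service of the kind Binding and Tudhope describe boils down to an HTTP API over a centrally maintained vocabulary. The endpoint, parameters, and response shape below are all hypothetical (the abstract does not specify a concrete API); only the requests library calls are real:

    ```python
    import requests

    # A client asks a remote terminology service for concepts matching a
    # label, instead of hosting and versioning the thesaurus itself.
    resp = requests.get(
        "https://terminology.example.org/api/concepts",  # placeholder service
        params={"label": "roman coin", "scheme": "archaeology"},
        timeout=10,
    )
    for concept in resp.json():  # assumed JSON list of concept records
        print(concept["uri"], concept["prefLabel"])
    ```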
  13. Davies, J.; Weeks, R.; Krohn, U.: QuizRDF: search technology for the Semantic Web (2004) 0.01
    0.0063121966 = product of:
      0.03787318 = sum of:
        0.03787318 = weight(_text_:searching in 4316) [ClassicSimilarity], result of:
          0.03787318 = score(doc=4316,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.26816747 = fieldWeight in 4316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.046875 = fieldNorm(doc=4316)
      0.16666667 = coord(1/6)
    
    Abstract
    An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the Semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available.
  14. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.01
    0.005951196 = product of:
      0.035707176 = sum of:
        0.035707176 = weight(_text_:searching in 4709) [ClassicSimilarity], result of:
          0.035707176 = score(doc=4709,freq=4.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.2528307 = fieldWeight in 4709, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
      0.16666667 = coord(1/6)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  15. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.01
    0.0056583136 = product of:
      0.03394988 = sum of:
        0.03394988 = product of:
          0.06789976 = sum of:
            0.06789976 = weight(_text_:etc in 3403) [ClassicSimilarity], result of:
              0.06789976 = score(doc=3403,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.35906604 = fieldWeight in 3403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3403)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
  16. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    0.005518458 = product of:
      0.033110745 = sum of:
        0.033110745 = product of:
          0.06622149 = sum of:
            0.06622149 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.06622149 = score(doc=4643,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 9.2007 15:41:14
  17. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 6061) [ClassicSimilarity], result of:
          0.031560984 = score(doc=6061,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.16666667 = coord(1/6)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link topology based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to an up-to-now unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
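    The link-topology ranking Krause refers to is exemplified by PageRank. A minimal power-iteration sketch over an invented three-page graph (a generic textbook formulation, not Google's production algorithm):

    ```python
    # rank(p) = (1-d)/N + d * sum(rank(q)/outdegree(q) for q linking to p)
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    pages = sorted(links)
    damping = 0.85
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):  # power iteration; 50 rounds converge for this toy graph
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            share = rank[page] / len(outs)
            for target in outs:
                new[target] += damping * share
        rank = new

    print({p: round(rank[p], 3) for p in pages})  # -> {'a': 0.388, 'b': 0.215, 'c': 0.397}
    ```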
  18. Davies, J.; Weeks, R.: QuizRDF: search technology for the Semantic Web (2004) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 4320) [ClassicSimilarity], result of:
          0.031560984 = score(doc=4320,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 4320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4320)
      0.16666667 = coord(1/6)
    
    Abstract
    An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the Semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available.
  19. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 231) [ClassicSimilarity], result of:
          0.031560984 = score(doc=231,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
      0.16666667 = coord(1/6)
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
  20. Allocca, C.; Aquin, M.d'; Motta, E.: Impact of using relationships between ontologies to enhance the ontology search results (2012) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 264) [ClassicSimilarity], result of:
          0.031560984 = score(doc=264,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
      0.16666667 = coord(1/6)
    
    Abstract
    Using semantic web search engines, such as Watson, Swoogle or Sindice, to find ontologies is a complex exploratory activity. It generally requires formulating multiple queries, browsing pages of results, and assessing the returned ontologies against each other to obtain a relevant and adequate subset of ontologies for the intended use. Our hypothesis is that at least some of the difficulties related to searching ontologies stem from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected and shown on different result pages. In earlier publications we presented a software framework, Kannel, which is able to automatically detect and make explicit relationships between ontologies in large ontology repositories. In this paper, we present a study that compares the use of the Watson ontology search engine with an extension, Watson+Kannel, which provides information regarding the various relationships occurring between the result ontologies. We evaluate Watson+Kannel by demonstrating through various indicators that explicit relationships between ontologies improve users' efficiency in ontology search, thus validating our hypothesis.

Languages

  • e 55
  • d 7

Types

  • a 34
  • el 20
  • m 12
  • s 4
  • x 2
  • n 1