Search (21 results, page 1 of 2)

  • theme_ss:"Semantic Web"
  • theme_ss:"Wissensrepräsentation"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    0.076330185 = product of:
      0.19082546 = sum of:
        0.047706366 = product of:
          0.1431191 = sum of:
            0.1431191 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1431191 = score(doc=701,freq=2.0), product of:
                0.38197818 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045055166 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.1431191 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.1431191 = score(doc=701,freq=2.0), product of:
            0.38197818 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045055166 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.4 = coord(2/5)
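    The nested breakdown above (and the similar blocks under the other results) is standard Lucene "explain" output for the ClassicSimilarity TF-IDF formula; the odd terms "3a" and "2f" are apparently tokens left over from the percent-encoded URL in this record's Content field. As an illustrative sketch only, the displayed document score can be re-derived from the printed factors; the Python below is not part of the retrieval system.

      import math

      # Re-derive the numbers printed in the explanation tree for result 1.
      # Every input is copied from the tree; nothing is read from the index itself.
      freq = 2.0                                # termFreq of the matched term in doc 701
      tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
      idf = 1 + math.log(44218 / (24 + 1))      # 8.478011 = idf(docFreq=24, maxDocs=44218)
      query_norm = 0.045055166                  # queryNorm
      field_norm = 0.03125                      # fieldNorm(doc=701)

      query_weight = idf * query_norm           # 0.38197818 = queryWeight
      field_weight = tf * idf * field_norm      # 0.3746787  = fieldWeight
      term_score = query_weight * field_weight  # 0.1431191 per matching term

      # The "3a" clause sits in a sub-query that matched 1 of 3 clauses (coord 1/3);
      # the whole query matched 2 of 5 clauses (coord 2/5).
      doc_score = (term_score * (1 / 3) + term_score) * (2 / 5)
      print(round(doc_score, 9))                # ~0.076330185, the score shown for result 1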
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  2. Ehlen, D.: Semantic Wiki : Konzeption eines Semantic MediaWiki für das Reallexikon zur Deutschen Kunstgeschichte (2010) 0.02
    0.021916403 = product of:
      0.10958201 = sum of:
        0.10958201 = weight(_text_:line in 3689) [ClassicSimilarity], result of:
          0.10958201 = score(doc=3689,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.4337094 = fieldWeight in 3689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3689)
      0.2 = coord(1/5)
    
    Abstract
    Wikis are a suitable means of implementing extensive knowledge collections such as lexica or encyclopedias; the best-known example is the globally successful free online encyclopedia Wikipedia. With conventional wiki environments, however, the potential of the stored texts cannot be fully exploited. Semantic wikis offer a new option: their contents are given semantic relations by means of machine-readable annotations. This bachelor's thesis takes up this idea and transfers parts of the "Reallexikon zur deutschen Kunstgeschichte" into a semantic wiki. Using a Semantic MediaWiki installation, it examines to what extent the new technology can be used for indexing the lexicon. An example wiki for the RdK is included on the accompanying CD.
  3. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.017100533 = product of:
      0.08550266 = sum of:
        0.08550266 = sum of:
          0.048876543 = weight(_text_:searching in 4649) [ClassicSimilarity], result of:
            0.048876543 = score(doc=4649,freq=2.0), product of:
              0.18226127 = queryWeight, product of:
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.045055166 = queryNorm
              0.26816747 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.03662612 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.03662612 = score(doc=4649,freq=2.0), product of:
              0.15777552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045055166 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.2 = coord(1/5)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
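    The abstract of result 3 distinguishes Web-based semantic distance measures such as Google distance and PMI from Linked-Data-based ones, but does not restate them. As a hedged sketch, the standard textbook definitions (Normalized Google Distance and pointwise mutual information over co-occurrence counts) look as follows; the hit counts in the usage example are invented placeholders, not data from the study.

      import math

      def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
          """Normalized Google Distance from hit counts f(x), f(y), f(x,y) over n pages."""
          lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
          return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

      def pmi(fx: float, fy: float, fxy: float, n: float) -> float:
          """Pointwise mutual information of two terms from co-occurrence counts."""
          return math.log((fxy * n) / (fx * fy))

      # Placeholder counts for two terms and their co-occurrence in a 50-billion-page corpus.
      print(ngd(120_000, 80_000, 15_000, 50e9))
      print(pmi(120_000, 80_000, 15_000, 50e9))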
  4. Zhang, L.: Linking information through function (2014) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 1526) [ClassicSimilarity], result of:
          0.04526677 = score(doc=1526,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 1526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1526)
      0.2 = coord(1/5)
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  5. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.01
    0.0070547215 = product of:
      0.035273608 = sum of:
        0.035273608 = product of:
          0.070547216 = sum of:
            0.070547216 = weight(_text_:searching in 99) [ClassicSimilarity], result of:
              0.070547216 = score(doc=99,freq=6.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.38706642 = fieldWeight in 99, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built from two engines: the first, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation relying on discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the semantically annotated texts produced previously to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are provided as supplementary information.
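    A minimal sketch of the two-stage idea this abstract describes: segments are first annotated with discourse/semantic categories, and a "semantic inverted index" then maps each category to the segments carrying it, so queries such as "definitions of X" can be answered. The trigger phrases and category names below are invented placeholders, not EXCOM's actual Contextual Exploration rules, which are far richer linguistic resources.

      from collections import defaultdict

      # Toy category triggers standing in for real linguistic annotation rules.
      CATEGORY_TRIGGERS = {
          "definition": ["is defined as", "refers to"],
          "causality":  ["because", "leads to"],
      }

      def annotate(segment: str) -> set:
          """Return the semantic categories whose trigger phrases occur in the segment."""
          return {cat for cat, cues in CATEGORY_TRIGGERS.items()
                  if any(cue in segment.lower() for cue in cues)}

      segments = [
          "An ontology is defined as a formal specification of a conceptualization.",
          "Ambiguity leads to low precision in keyword search.",
      ]

      # Semantic inverted index: category -> list of segment ids.
      semantic_index = defaultdict(list)
      for i, seg in enumerate(segments):
          for category in annotate(seg):
              semantic_index[category].append(i)

      print(semantic_index["definition"])   # [0]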
  6. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.01
    0.0060355696 = product of:
      0.030177847 = sum of:
        0.030177847 = weight(_text_:bibliographic in 4796) [ClassicSimilarity], result of:
          0.030177847 = score(doc=4796,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.17204987 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
      0.2 = coord(1/5)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
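    The report's core point above is that library resources become re-usable on the wider Web once they are identified by URIs and related to other things with RDF triples. A minimal, hedged sketch with rdflib is shown below; the record URI and the choice of Dublin Core terms are illustrative assumptions, not a vocabulary recommended by the report.

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DCTERMS, RDF

      g = Graph()
      record = URIRef("http://example.org/bib/record/123")        # hypothetical record URI
      g.add((record, RDF.type, DCTERMS.BibliographicResource))
      g.add((record, DCTERMS.title, Literal("Ontology-based Information Retrieval")))
      g.add((record, DCTERMS.creator, URIRef("http://example.org/person/stojanovic")))
      g.add((record, DCTERMS.subject, URIRef("http://example.org/subject/semantic-web")))

      # Serialize the small graph as Turtle; other Linked Data consumers can merge it
      # with their own graphs because the identifiers are global URIs.
      print(g.serialize(format="turtle"))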
  7. Wang, H.; Liu, Q.; Penin, T.; Fu, L.; Zhang, L.; Tran, T.; Yu, Y.; Pan, Y.: Semplore: a scalable IR approach to search the Web of Data (2009) 0.00
    0.0048876544 = product of:
      0.024438271 = sum of:
        0.024438271 = product of:
          0.048876543 = sum of:
            0.048876543 = weight(_text_:searching in 1638) [ClassicSimilarity], result of:
              0.048876543 = score(doc=1638,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.26816747 = fieldWeight in 1638, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1638)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The Web of Data keeps growing rapidly. However, the full exploitation of this large amount of structured data faces numerous challenges like usability, scalability, imprecise information needs and data change. We present Semplore, an IR-based system that aims at addressing these issues. Semplore supports intuitive faceted search and complex queries both on text and structured data. It combines imprecise keyword search and precise structured query in a unified ranking scheme. Scalable query processing is supported by leveraging inverted indexes traditionally used in IR systems. This is combined with a novel block-based index structure to support efficient index update when data changes. The experimental results show that Semplore is an efficient and effective system for searching the Web of Data and can be used as a basic infrastructure for Web-scale Semantic Web search engines.
  8. Davies, J.; Weeks, R.; Krohn, U.: QuizRDF: search technology for the Semantic Web (2004) 0.00
    0.0048876544 = product of:
      0.024438271 = sum of:
        0.024438271 = product of:
          0.048876543 = sum of:
            0.048876543 = weight(_text_:searching in 4316) [ClassicSimilarity], result of:
              0.048876543 = score(doc=4316,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.26816747 = fieldWeight in 4316, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4316)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the Semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available.
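    A hedged sketch of the indexing idea QuizRDF's abstract describes: tokens from a resource's full text and from the literal values of its RDF annotations go into a single inverted index, so one keyword query can match either source while the ontology remains available for browsing. The field labels and tokenizer are invented here, not the system's actual implementation.

      from collections import defaultdict

      def tokenize(text: str) -> list:
          return [t.lower().strip(".,") for t in text.split()]

      # Inverted index: token -> set of (document id, source of the token).
      index = defaultdict(set)

      def index_resource(doc_id: str, full_text: str, annotation_literals: list) -> None:
          for token in tokenize(full_text):
              index[token].add((doc_id, "text"))
          for literal in annotation_literals:
              for token in tokenize(literal):
                  index[token].add((doc_id, "annotation"))

      index_resource("doc1",
                     "QuizRDF combines keyword search with RDF browsing.",
                     ["search technology", "Semantic Web"])
      print(index["search"])   # hits from both the full text and an annotation literal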
  9. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.00
    0.0048834826 = product of:
      0.024417413 = sum of:
        0.024417413 = product of:
          0.048834827 = sum of:
            0.048834827 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.048834827 = score(doc=3376,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    31. 7.2010 16:58:22
  10. OWL Web Ontology Language Test Cases (2004) 0.00
    0.0048834826 = product of:
      0.024417413 = sum of:
        0.024417413 = product of:
          0.048834827 = sum of:
            0.048834827 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.048834827 = score(doc=4685,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    14. 8.2011 13:33:22
  11. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.00
    0.004273047 = product of:
      0.021365236 = sum of:
        0.021365236 = product of:
          0.042730473 = sum of:
            0.042730473 = weight(_text_:22 in 4330) [ClassicSimilarity], result of:
              0.042730473 = score(doc=4330,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.2708308 = fieldWeight in 4330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4330)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    12. 2.2011 17:35:22
  12. Davies, J.; Weeks, R.: QuizRDF: search technology for the Semantic Web (2004) 0.00
    0.004073045 = product of:
      0.020365225 = sum of:
        0.020365225 = product of:
          0.04073045 = sum of:
            0.04073045 = weight(_text_:searching in 4320) [ClassicSimilarity], result of:
              0.04073045 = score(doc=4320,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.22347288 = fieldWeight in 4320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4320)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the Semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available.
  13. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.00
    0.004073045 = product of:
      0.020365225 = sum of:
        0.020365225 = product of:
          0.04073045 = sum of:
            0.04073045 = weight(_text_:searching in 231) [ClassicSimilarity], result of:
              0.04073045 = score(doc=231,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.22347288 = fieldWeight in 231, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=231)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
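    A hedged sketch of the hybrid-query idea summarized above: a structured constraint (here an rdf:type restriction) and keyword conditions are all kept as posting lists in one IR-style inverted index, and a hybrid query is answered by intersecting them. The index layout and field names are illustrative, not Semplore's actual block-based design.

      # Posting lists keyed by (field, value); entity ids are placeholders.
      postings = {
          ("keyword", "semantic"): {"e1", "e3", "e7"},
          ("keyword", "web"):      {"e1", "e2", "e7"},
          ("type", "Person"):      {"e3", "e7", "e9"},
      }

      def hybrid_query(keywords: list, rdf_type: str) -> set:
          """Intersect the posting lists of all keyword and type constraints."""
          result = None
          for key in [("keyword", k) for k in keywords] + [("type", rdf_type)]:
              hits = postings.get(key, set())
              result = hits if result is None else result & hits
          return result or set()

      print(hybrid_query(["semantic", "web"], "Person"))   # {'e7'}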
  14. Allocca, C.; Aquin, M.d'; Motta, E.: Impact of using relationships between ontologies to enhance the ontology search results (2012) 0.00
    0.004073045 = product of:
      0.020365225 = sum of:
        0.020365225 = product of:
          0.04073045 = sum of:
            0.04073045 = weight(_text_:searching in 264) [ClassicSimilarity], result of:
              0.04073045 = score(doc=264,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.22347288 = fieldWeight in 264, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=264)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Using semantic web search engines, such as Watson, Swoogle or Sindice, to find ontologies is a complex exploratory activity. It generally requires formulating multiple queries, browsing pages of results, and assessing the returned ontologies against each other to obtain a relevant and adequate subset of ontologies for the intended use. Our hypothesis is that at least some of the difficulties related to searching ontologies stem from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected and shown on different result pages. In earlier publications we presented a software framework, Kannel, which is able to automatically detect and make explicit relationships between ontologies in large ontology repositories. In this paper, we present a study that compares the use of the Watson ontology search engine with an extension, Watson+Kannel, which provides information regarding the various relationships occurring between the result ontologies. We evaluate Watson+Kannel by demonstrating through various indicators that explicit relationships between ontologies improve users' efficiency in ontology search, thus validating our hypothesis.
  15. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.00
    0.0040321094 = product of:
      0.020160547 = sum of:
        0.020160547 = product of:
          0.040321093 = sum of:
            0.040321093 = weight(_text_:searching in 517) [ClassicSimilarity], result of:
              0.040321093 = score(doc=517,freq=4.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.22122687 = fieldWeight in 517, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=517)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
  16. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.00
    0.003662612 = product of:
      0.01831306 = sum of:
        0.01831306 = product of:
          0.03662612 = sum of:
            0.03662612 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.03662612 = score(doc=2418,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  17. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.00
    0.003662612 = product of:
      0.01831306 = sum of:
        0.01831306 = product of:
          0.03662612 = sum of:
            0.03662612 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
              0.03662612 = score(doc=2024,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.23214069 = fieldWeight in 2024, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2024)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  18. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.00
    0.0034531436 = product of:
      0.017265718 = sum of:
        0.017265718 = product of:
          0.034531437 = sum of:
            0.034531437 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.034531437 = score(doc=2654,freq=4.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.21886435 = fieldWeight in 2654, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collected effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses ISO-2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences of granularities of the two original schemes and their presentation with various levels of SKOS elements; as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares the sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
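    A minimal, hedged sketch of the kind of SKOS encoding the abstract above describes: a classification class and a thesaurus term each become a skos:Concept, and the CCT class-to-term mapping is expressed with a SKOS mapping property. The URIs, labels, notation and the choice of skos:relatedMatch are illustrative assumptions; the poster does not publish its exact modelling decisions here.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      CCT = Namespace("http://example.org/cct/")            # hypothetical namespace
      g = Graph()

      clc_class = CCT["class/TP391"]                        # hypothetical CLC-style class
      thesaurus_term = CCT["term/information-retrieval"]    # hypothetical thesaurus term

      g.add((clc_class, RDF.type, SKOS.Concept))
      g.add((clc_class, SKOS.notation, Literal("TP391")))
      g.add((clc_class, SKOS.prefLabel, Literal("Information retrieval", lang="en")))

      g.add((thesaurus_term, RDF.type, SKOS.Concept))
      g.add((thesaurus_term, SKOS.prefLabel, Literal("Information retrieval", lang="en")))

      # CCT provides, for each class, the corresponding thesaurus terms and vice versa.
      g.add((clc_class, SKOS.relatedMatch, thesaurus_term))

      print(g.serialize(format="turtle"))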
  19. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.00
    0.0032584362 = product of:
      0.01629218 = sum of:
        0.01629218 = product of:
          0.03258436 = sum of:
            0.03258436 = weight(_text_:searching in 230) [ClassicSimilarity], result of:
              0.03258436 = score(doc=230,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.1787783 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.03125 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to solve the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision, since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model, oriented to the exploitation of domain Knowledge Bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search. Additional contributions include: an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. Conducted experiments show that our semantic search model obtained comparable and better performance results (in terms of MAP and P@10 values) than the best TREC automatic system.
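    A hedged sketch of the rank-fusion step mentioned in the abstract above: each document gets a keyword-based score and an ontology-based score, the two are combined, and documents with no semantic evidence fall back to their keyword score so that sparseness of the knowledge base does not suppress them. The weighting scheme is an illustrative assumption, not the paper's actual fusion technique.

      def fuse(keyword_scores: dict, semantic_scores: dict, alpha: float = 0.6) -> list:
          """Blend semantic and keyword scores; fall back to keywords where no semantic score exists."""
          fused = {}
          for doc, kw in keyword_scores.items():
              sem = semantic_scores.get(doc)
              fused[doc] = kw if sem is None else alpha * sem + (1 - alpha) * kw
          return sorted(fused.items(), key=lambda item: item[1], reverse=True)

      # d2 has no semantic score (sparse knowledge base) and keeps its keyword score.
      print(fuse({"d1": 0.42, "d2": 0.37, "d3": 0.55},
                 {"d1": 0.90, "d3": 0.10}))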
  20. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.00
    0.0030521767 = product of:
      0.015260884 = sum of:
        0.015260884 = product of:
          0.030521767 = sum of:
            0.030521767 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.030521767 = score(doc=4553,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    16.11.2018 14:22:01