Search (78 results, page 1 of 4)

  • Active filter: theme_ss:"Semantic Web"
  1. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.02
    0.018448254 = product of:
      0.05534476 = sum of:
        0.05534476 = product of:
          0.08301714 = sum of:
            0.041696113 = weight(_text_:29 in 2640) [ClassicSimilarity], result of:
              0.041696113 = score(doc=2640,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.27205724 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
            0.04132103 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.04132103 = score(doc=2640,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.2708308 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    20. 2.2009 10:29:39
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
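
The indented breakdown under each hit is Lucene explain() output for TF-IDF scoring (ClassicSimilarity). As a reading aid, the sketch below reproduces the score of result 1 from the factors shown in that tree; only the constants are copied from the listing, while the function and variable names are illustrative and not part of the search system.

```python
import math

# Minimal sketch of Lucene ClassicSimilarity (TF-IDF) scoring, assuming the
# standard formulas; constants are taken from the explain tree of result 1.

def idf(doc_freq, max_docs):
    # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                          # tf(freq) = sqrt(termFreq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm          # queryWeight = idf * queryNorm
    field_weight = tf * term_idf * field_norm     # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight            # per-term score

QUERY_NORM = 0.043569047
MAX_DOCS = 44218

s_29 = term_score(2.0, 3565, MAX_DOCS, QUERY_NORM, 0.0546875)   # term "29": ~0.0417
s_22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, 0.0546875)   # term "22": ~0.0413

# The two coord factors from the tree (matched clauses / total clauses).
doc_score = (s_29 + s_22) * (2 / 3) * (1 / 3)
print(round(doc_score, 6))   # ~0.018448, shown as 0.02 in the result list
```

The same arithmetic, with different idf, fieldNorm and coord values, accounts for every other score breakdown on this page.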
  2. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.01581279 = product of:
      0.04743837 = sum of:
        0.04743837 = product of:
          0.07115755 = sum of:
            0.035739526 = weight(_text_:29 in 4649) [ClassicSimilarity], result of:
              0.035739526 = score(doc=4649,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.23319192 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
            0.035418026 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.035418026 = score(doc=4649,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  3. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.02
    0.01581279 = product of:
      0.04743837 = sum of:
        0.04743837 = product of:
          0.07115755 = sum of:
            0.035739526 = weight(_text_:29 in 662) [ClassicSimilarity], result of:
              0.035739526 = score(doc=662,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.23319192 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
            0.035418026 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
              0.035418026 = score(doc=662,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.23214069 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:29:20
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.0153776 = product of:
      0.0461328 = sum of:
        0.0461328 = product of:
          0.1383984 = sum of:
            0.1383984 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1383984 = score(doc=701,freq=2.0), product of:
                0.36937886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043569047 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  5. Feigenbaum, L.; Herman, I.; Hongsermeier, T.; Neumann, E.; Stephens, S.: ¬The Semantic Web in action (2007) 0.01
    0.013780861 = product of:
      0.041342583 = sum of:
        0.041342583 = product of:
          0.062013872 = sum of:
            0.038187522 = weight(_text_:network in 3000) [ClassicSimilarity], result of:
              0.038187522 = score(doc=3000,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.1968132 = fieldWeight in 3000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3000)
            0.023826351 = weight(_text_:29 in 3000) [ClassicSimilarity], result of:
              0.023826351 = score(doc=3000,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.15546128 = fieldWeight in 3000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3000)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Six years ago in this magazine, Tim Berners-Lee, James Hendler and Ora Lassila unveiled a nascent vision of the Semantic Web: a highly interconnected network of data that could be easily accessed and understood by any desktop or handheld machine. They painted a future of intelligent software agents that would head out on the World Wide Web and automatically book flights and hotels for our trips, update our medical records and give us a single, customized answer to a particular question without our having to search for information or pore through results. They also presented the young technologies that would make this vision come true: a common language for representing data that could be understood by all kinds of software agents; ontologies--sets of statements--that translate information from disparate databases into common terms; and rules that allow software agents to reason about the information described in those terms. The data format, ontologies and reasoning software would operate like one big application on the World Wide Web, analyzing all the raw data stored in online databases as well as all the data about the text, images, video and communications the Web contained. Like the Web itself, the Semantic Web would grow in a grassroots fashion, only this time aided by working groups within the World Wide Web Consortium, which helps to advance the global medium. Since then skeptics have said the Semantic Web would be too difficult for people to understand or exploit. Not so. The enabling technologies have come of age. A vibrant community of early adopters has agreed on standards that have steadily made the Semantic Web practical to use. Large companies have major projects under way that will greatly improve the efficiencies of in-house operations and of scientific research. Other firms are using the Semantic Web to enhance business-to-business interactions and to build the hidden data-processing structures, or back ends, behind new consumer services. And like an iceberg, the tip of this large body of work is emerging in direct consumer applications, too.
    Date
    31.12.1996 19:29:41
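
The technologies named in the abstract of result 5 above (a common data language, ontologies that translate terms between databases, rules for reasoning) correspond to RDF and OWL. A minimal sketch, assuming the rdflib library; the namespace, resources and property names are invented for illustration and are not taken from the article.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/")   # hypothetical vocabulary for illustration
g = Graph()
g.bind("ex", EX)

# "A common language for representing data": facts expressed as RDF triples.
g.add((EX.trip42, RDF.type, EX.Flight))
g.add((EX.trip42, EX.departsFrom, EX.Berlin))

# "Ontologies -- sets of statements -- that translate information from
# disparate databases into common terms": mapping statements between vocabularies.
g.add((EX.departsFrom, RDFS.subPropertyOf, EX.startsAt))
g.add((EX.Flight, OWL.equivalentClass, EX.AirTrip))

print(g.serialize(format="turtle"))     # a reasoner or rule engine could now infer
                                        # that ex:trip42 ex:startsAt ex:Berlin
```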
  6. Leskinen, P.; Hyvönen, E.: Extracting genealogical networks of linked data from biographical texts (2019) 0.01
    0.012861087 = product of:
      0.03858326 = sum of:
        0.03858326 = product of:
          0.11574978 = sum of:
            0.11574978 = weight(_text_:network in 5798) [ClassicSimilarity], result of:
              0.11574978 = score(doc=5798,freq=6.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.59655833 = fieldWeight in 5798, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5798)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents the idea and our work of extracting and reassembling a genealogical network automatically from a collection of biographies. The network can be used as a tool for network analysis of historical persons. The data has been published as Linked Data and as an interactive online service as part of the in-use data service and semantic portal BiographySampo - Finnish Biographies on the Semantic Web.
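
The extraction pipeline itself is not described in the abstract; the toy sketch below only illustrates the kind of genealogical network such extraction yields and a first network-analysis step, using networkx. The names and relations are invented.

```python
import networkx as nx

# Hypothetical output of relation extraction from biography texts:
# (person, relative, relation type) -- all names invented for illustration.
relations = [
    ("Anna Virtanen", "Matti Virtanen", "childOf"),
    ("Anna Virtanen", "Liisa Korhonen", "childOf"),
    ("Matti Virtanen", "Liisa Korhonen", "spouseOf"),
]

G = nx.DiGraph()
for person, relative, rel in relations:
    G.add_edge(person, relative, relation=rel)   # typed, directed edge

# A simple network-analysis question: who is most connected in the network?
print(sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True))
```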
  7. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.01
    0.011125877 = product of:
      0.03337763 = sum of:
        0.03337763 = product of:
          0.05006644 = sum of:
            0.020848056 = weight(_text_:29 in 1155) [ClassicSimilarity], result of:
              0.020848056 = score(doc=1155,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.13602862 = fieldWeight in 1155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1155)
            0.029218383 = weight(_text_:22 in 1155) [ClassicSimilarity], result of:
              0.029218383 = score(doc=1155,freq=4.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.19150631 = fieldWeight in 1155, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1155)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    All the papers underwent a thorough and rigorous peer-review process. The review and selection this year were highly competitive, and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were accepted, totaling 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the arts department of KU Leuven (Belgium) and director of the university library, addressed semantic research drawing on his work with Europeana. The title of his presentation was "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor at our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new potential at the intersection of search and linked data. The title of his talk was "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
    Date
    17.12.2013 12:51:22
  8. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    0.009182452 = product of:
      0.027547356 = sum of:
        0.027547356 = product of:
          0.08264206 = sum of:
            0.08264206 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.08264206 = score(doc=4643,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  9. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.007870673 = product of:
      0.023612019 = sum of:
        0.023612019 = product of:
          0.07083605 = sum of:
            0.07083605 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07083605 = score(doc=6048,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  10. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    0.007870673 = product of:
      0.023612019 = sum of:
        0.023612019 = product of:
          0.07083605 = sum of:
            0.07083605 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.07083605 = score(doc=100,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  11. Campbell, D.G.: Derrida, logocentrism, and the concept of warrant on the Semantic Web (2008) 0.01
    0.007500738 = product of:
      0.022502214 = sum of:
        0.022502214 = product of:
          0.06750664 = sum of:
            0.06750664 = weight(_text_:network in 2507) [ClassicSimilarity], result of:
              0.06750664 = score(doc=2507,freq=4.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.34791988 = fieldWeight in 2507, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2507)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    The highly-structured data standards of the Semantic Web offer a promising venue for the migration of library subject access standards onto the World Wide Web. The new functionalities of the Web, however, along with the anticipated capabilities of intelligent Web agents, suggest that information on the Semantic Web will have much more flexibility, diversity and mutability. We need, therefore, a method for recognizing and assessing the principles whereby Semantic Web information can combine together in productive and useful ways. This paper will argue that the concept of warrant in traditional library science can provide a useful means of translating library knowledge structures into Web-based knowledge structures. Using Derrida's concept of logocentrism, this paper suggests that while "warrant" in library science traditionally alludes to the principles by which concepts are admitted into the design of a classification or access system, "warrant" on the Semantic Web alludes to the principles by which Web resources can be admitted into a network of information uses. Furthermore, library information practice suggests a far more complex network of warrant concepts that provide a subtlety and richness to knowledge organization that the Semantic Web has not yet attained.
  12. Guns, R.: Tracing the origins of the semantic web (2013) 0.01
    0.007500738 = product of:
      0.022502214 = sum of:
        0.022502214 = product of:
          0.06750664 = sum of:
            0.06750664 = weight(_text_:network in 1093) [ClassicSimilarity], result of:
              0.06750664 = score(doc=1093,freq=4.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.34791988 = fieldWeight in 1093, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1093)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Semantic Web has been criticized for not being semantic. This article examines the questions of why and how the Web of Data, expressed in the Resource Description Framework (RDF), has come to be known as the Semantic Web. Contrary to previous papers, we deliberately take a descriptive stance and do not start from preconceived ideas about the nature of semantics. Instead, we mainly base our analysis on early design documents of the (Semantic) Web. The main determining factor is shown to be link typing, coupled with the influence of online metadata. Both factors already were present in early web standards and drafts. Our findings indicate that the Semantic Web is directly linked to older artificial intelligence work, despite occasional claims to the contrary. Because of link typing, the Semantic Web can be considered an example of a semantic network. Originally network representations of the meaning of natural language utterances, semantic networks have eventually come to refer to any networks with typed (usually directed) links. We discuss possible causes for this shift and suggest that it may be due to confounding paradigmatic and syntagmatic semantic relations.
  13. Berners-Lee, T.; Hendler, J.; Lassila, O.: ¬The Semantic Web : a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities (2001) 0.01
    0.006618432 = product of:
      0.019855294 = sum of:
        0.019855294 = product of:
          0.059565883 = sum of:
            0.059565883 = weight(_text_:29 in 376) [ClassicSimilarity], result of:
              0.059565883 = score(doc=376,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.38865322 = fieldWeight in 376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=376)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    31.12.1996 19:29:41
  14. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.006558894 = product of:
      0.019676682 = sum of:
        0.019676682 = product of:
          0.059030045 = sum of:
            0.059030045 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.059030045 = score(doc=2090,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  15. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.01
    0.0053038225 = product of:
      0.015911467 = sum of:
        0.015911467 = product of:
          0.047734402 = sum of:
            0.047734402 = weight(_text_:network in 1909) [ClassicSimilarity], result of:
              0.047734402 = score(doc=1909,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.2460165 = fieldWeight in 1909, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1909)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal show some weaknesses of this approach which come out in most metadata-driven Digital Libraries (DLs) or subject specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that gain a certain quality in DLs if they are combined with heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods, which are derived from scientometrics and network analysis, will be implemented with the objective to re-rank result sets by the following structural properties: the ranking of the results by core journals (so-called Bradfordizing) and ranking by centrality of authors in co-authorship networks. Findings - The methods, which will be implemented, focus on the query and on the result side of a search and are designed to positively influence each other. Conceptually, they will improve the search quality and guarantee that the most relevant documents in result sets will be ranked higher. Originality/value - The central impact of the paper focuses on the integration of three structural value-adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages in the information retrieval process: query construction, search and ranking and re-ranking.
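
A hedged sketch of the second re-ranking idea mentioned in the abstract above (ranking by author centrality in a co-authorship network), assuming networkx; the hit list, field names and the way the two scores are mixed are invented for illustration and do not reproduce the vascoda implementation.

```python
from itertools import combinations
import networkx as nx

# Hypothetical result set: (document id, author list, original retrieval score).
results = [
    {"id": "d1", "authors": ["Mayr", "Petras"], "score": 0.71},
    {"id": "d2", "authors": ["Mutschke"], "score": 0.69},
    {"id": "d3", "authors": ["Smith"], "score": 0.74},
]

# Build the co-authorship network: one edge per pair of co-authors.
G = nx.Graph()
for doc in results:
    G.add_nodes_from(doc["authors"])
    for a, b in combinations(doc["authors"], 2):
        G.add_edge(a, b)

centrality = nx.degree_centrality(G)     # any other centrality measure would fit here

def reranked_score(doc, alpha=0.5):
    # Mix the original retrieval score with the best author centrality.
    best = max((centrality.get(a, 0.0) for a in doc["authors"]), default=0.0)
    return alpha * doc["score"] + (1 - alpha) * best

for doc in sorted(results, key=reranked_score, reverse=True):
    print(doc["id"], round(reranked_score(doc), 3))
```

Bradfordizing, the first method named in the abstract, would instead re-rank each hit by the core-journal status of its source.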
  16. Maltese, V.; Farazi, F.: Towards the integration of knowledge organization systems with the linked data cloud (2011) 0.01
    0.0053038225 = product of:
      0.015911467 = sum of:
        0.015911467 = product of:
          0.047734402 = sum of:
            0.047734402 = weight(_text_:network in 4815) [ClassicSimilarity], result of:
              0.047734402 = score(doc=4815,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.2460165 = fieldWeight in 4815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4815)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In representing the shared view of all the people involved, building a knowledge organization system (KOS) from scratch is extremely costly, and it is therefore fundamental to reuse existing resources. This can be done by progressively extending the KOS with knowledge coming from similar KOSs and by promoting interoperability among them. The linked data initiative is indeed encouraging people to share and integrate their datasets into a giant network of interconnected resources. This enables different applications to interoperate and share their data. The integration should take into account the purpose of the datasets, however, and make explicit the semantics. In fact, the difference in the purpose is reflected in the difference in the semantics. With this paper we (a) highlight the potential problems that may arise by not taking into account purpose and semantics; (b) make clear how the difference in the purpose is reflected in totally different semantics and (c) provide an algorithm to translate from one semantics into another as a preliminary step towards the integration of ontologies designed for different purposes. This will allow reusing the ontologies even in contexts different from those in which they were designed.
  17. Maltese, V.; Farazi, F.: Towards the integration of knowledge organization systems with the linked data cloud (2011) 0.01
    0.0053038225 = product of:
      0.015911467 = sum of:
        0.015911467 = product of:
          0.047734402 = sum of:
            0.047734402 = weight(_text_:network in 602) [ClassicSimilarity], result of:
              0.047734402 = score(doc=602,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.2460165 = fieldWeight in 602, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=602)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In representing the shared view of all the people involved, building a Knowledge Organization System (KOS) from scratch is extremely costly, and it is therefore fundamental to reuse existing resources. This can be done by progressively extending the KOS with knowledge coming from similar KOS and by promoting interoperability among them. The linked data initiative is indeed encouraging people to share and integrate their datasets into a giant network of interconnected resources. This enables different applications to interoperate and share their data. However, the integration should take into account the purpose of the datasets and make explicit the semantics. In fact, the difference in the purpose is reflected in the difference in the semantics. With this paper we (a) highlight the potential problems that may arise by not taking into account purpose and semantics, (b) make clear how the difference in the purpose is reflected in totally different semantics and (c) provide an algorithm to translate from one semantics into another as a preliminary step towards the integration of ontologies designed for different purposes. This will allow reusing the ontologies even in contexts different from those in which they were designed.
  18. Berners-Lee, T.; Hendler, J.; Lassila, O.: Mein Computer versteht mich (2001) 0.01
    0.0052947453 = product of:
      0.015884236 = sum of:
        0.015884236 = product of:
          0.047652703 = sum of:
            0.047652703 = weight(_text_:29 in 4550) [ClassicSimilarity], result of:
              0.047652703 = score(doc=4550,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.31092256 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4550)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    31.12.1996 19:29:41
  19. Tennis, J.T.: Scheme versioning in the Semantic Web (2006) 0.01
    0.0052947453 = product of:
      0.015884236 = sum of:
        0.015884236 = product of:
          0.047652703 = sum of:
            0.047652703 = weight(_text_:29 in 4939) [ClassicSimilarity], result of:
              0.047652703 = score(doc=4939,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.31092256 = fieldWeight in 4939, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4939)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    23.12.2007 10:14:29
  20. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.01
    0.0052947453 = product of:
      0.015884236 = sum of:
        0.015884236 = product of:
          0.047652703 = sum of:
            0.047652703 = weight(_text_:29 in 3926) [ClassicSimilarity], result of:
              0.047652703 = score(doc=3926,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.31092256 = fieldWeight in 3926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3926)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Pages
    S.29-63

Languages

  • e 64
  • d 14

Types

  • a 48
  • el 17
  • m 17
  • s 12
  • n 1
  • x 1
