Search (103 results, page 1 of 6)

  • theme_ss:"Semantic Web"
  1. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.04
    0.04454048 = product of:
      0.06681072 = sum of:
        0.054823436 = weight(_text_:development in 4640) [ClassicSimilarity], result of:
          0.054823436 = score(doc=4640,freq=4.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.34239948 = fieldWeight in 4640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
        0.011987286 = product of:
          0.035961855 = sum of:
            0.035961855 = weight(_text_:29 in 4640) [ClassicSimilarity], result of:
              0.035961855 = score(doc=4640,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23319192 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
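     The indented breakdown above is the Lucene "explain" trace for the ClassicSimilarity (TF-IDF) ranking used by this search: each leaf term weight works out as queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(tf) × idf × fieldNorm, and the coord() factors scale partial sums by the fraction of query clauses that matched. A minimal sketch in plain Python (values copied from the trace above; it only re-does the arithmetic and is not Lucene code) reproduces the 0.04454048 total:
       from math import sqrt

       # Constants copied from the explain trace for doc 4640 (terms "development" and "29").
       QUERY_NORM = 0.04384008   # query normalisation factor
       FIELD_NORM = 0.046875     # stored length norm for this field

       def leaf_weight(freq, idf):
           # ClassicSimilarity leaf: queryWeight * fieldWeight
           query_weight = idf * QUERY_NORM             # e.g. 0.16011542 for "development"
           field_weight = sqrt(freq) * idf * FIELD_NORM
           return query_weight * field_weight

       development = leaf_weight(4.0, 3.652261)        # 0.054823436
       term_29 = leaf_weight(2.0, 3.5176873) / 3       # inner coord(1/3) -> 0.011987286
       total = (development + term_29) * 2 / 3         # outer coord(2/3) -> 0.04454048
       print(development, term_29, total)
     The same pattern repeats for every hit below; only the term frequencies, idf values, fieldNorm and the coord fractions change.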
    
    Abstract
     Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
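     As a rough illustration of what such a conversion step produces, the sketch below (Python with rdflib) turns a single thesaurus record into RDF; the namespace, descriptor IDs and labels are invented placeholders, and SKOS is used only as a convenient, widely known target vocabulary, while the paper itself targets RDF(S) and OWL representations of WordNet, AAT and MeSH.
       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       # Hypothetical namespace and descriptor IDs, for illustration only.
       MESH = Namespace("http://example.org/mesh/")

       g = Graph()
       g.bind("skos", SKOS)
       g.bind("mesh", MESH)

       concept = MESH["D012345"]                          # placeholder descriptor ID
       g.add((concept, RDF.type, SKOS.Concept))
       g.add((concept, SKOS.prefLabel, Literal("Ontology", lang="en")))
       g.add((concept, SKOS.altLabel, Literal("Ontologies", lang="en")))
       g.add((concept, SKOS.broader, MESH["D011111"]))    # placeholder broader descriptor

       print(g.serialize(format="turtle"))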
    Date
    29. 7.2011 14:44:56
  2. O'Hara, K.; Hall, W.: Semantic Web (2009) 0.04
    0.039474797 = product of:
      0.059212193 = sum of:
        0.04522703 = weight(_text_:development in 3871) [ClassicSimilarity], result of:
          0.04522703 = score(doc=3871,freq=2.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.28246516 = fieldWeight in 3871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3871)
        0.013985164 = product of:
          0.041955493 = sum of:
            0.041955493 = weight(_text_:29 in 3871) [ClassicSimilarity], result of:
              0.041955493 = score(doc=3871,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.27205724 = fieldWeight in 3871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3871)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    The "semantic web (SW)" is a vision of a web of linked data, allowing querying, integration, and sharing of data from distributed sources in heterogeneous formats, using ontologies to provide an associated and explicit semantic interpretation. This entry describes the series of layered formalisms and standards that underlie this vision, and chronicles their historical and ongoing development. A number of applications, scientific and otherwise, academic and commercial, are reviewed. The SW has often been a controversial enterprise, and some of the controversies are reviewed, and misconceptions defused.
    Date
    27. 8.2011 14:29:18
  3. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.04
    0.03711707 = product of:
      0.055675603 = sum of:
        0.045686197 = weight(_text_:development in 3576) [ClassicSimilarity], result of:
          0.045686197 = score(doc=3576,freq=4.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.2853329 = fieldWeight in 3576, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3576)
        0.009989405 = product of:
          0.029968213 = sum of:
            0.029968213 = weight(_text_:29 in 3576) [ClassicSimilarity], result of:
              0.029968213 = score(doc=3576,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.19432661 = fieldWeight in 3576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3576)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
     This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
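     A minimal sketch of the "common interface" idea described above: query several knowledge bases, collate the answers into one record format, and keep the source attached to each piece of information. All source names, record fields and return values below are invented for illustration.
       # Two stand-ins for existing knowledge bases with different native formats.
       def search_source_a(term):
           return [{"label": term, "definition": "definition text from source A"}]

       def search_source_b(term):
           return [{"term": term, "scopeNote": "scope note from source B"}]

       def collate(term):
           # Map each native record into a common format, keeping its source attached.
           common = []
           for rec in search_source_a(term):
               common.append({"term": rec["label"], "note": rec["definition"], "source": "A"})
           for rec in search_source_b(term):
               common.append({"term": rec["term"], "note": rec["scopeNote"], "source": "B"})
           return common

       print(collate("thesaurus"))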
    Date
    15. 6.2010 19:25:29
  4. Matthews, B.M.: Integration via meaning : using the Semantic Web to deliver Web services (2002) 0.03
    0.03383554 = product of:
      0.05075331 = sum of:
        0.038766023 = weight(_text_:development in 3609) [ClassicSimilarity], result of:
          0.038766023 = score(doc=3609,freq=2.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.242113 = fieldWeight in 3609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=3609)
        0.011987286 = product of:
          0.035961855 = sum of:
            0.035961855 = weight(_text_:29 in 3609) [ClassicSimilarity], result of:
              0.035961855 = score(doc=3609,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23319192 = fieldWeight in 3609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3609)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
     The major developments of the World-Wide Web (WWW) in the last two years have been Web Services and the Semantic Web. The former allows the construction of distributed systems across the WWW by providing a lightweight middleware architecture. The latter provides an infrastructure for accessing resources on the WWW via their relationships with respect to conceptual descriptions. In this paper, I shall review the progress undertaken in each of these two areas. Further, I shall argue that in order for the aims of both the Semantic Web and the Web Services activities to be successful, the Web Service architecture needs to be augmented by concepts and tools of the Semantic Web. This infrastructure will allow resource discovery, brokering and access to be enabled in a standardised, integrated and interoperable manner. Finally, I survey the CLRC Information Technology R&D programme to show how it is contributing to the development of this future infrastructure.
    Source
     Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak and A. Nase
  5. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.03
    0.03376365 = product of:
      0.050645474 = sum of:
        0.038766023 = weight(_text_:development in 2556) [ClassicSimilarity], result of:
          0.038766023 = score(doc=2556,freq=2.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.242113 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.011879452 = product of:
          0.035638355 = sum of:
            0.035638355 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.035638355 = score(doc=2556,freq=2.0), product of:
                0.1535205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
  6. Aberer, K. et al.: ¬The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings (2007) 0.03
    0.029693654 = product of:
      0.04454048 = sum of:
        0.036548957 = weight(_text_:development in 2477) [ClassicSimilarity], result of:
          0.036548957 = score(doc=2477,freq=4.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.22826631 = fieldWeight in 2477, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.03125 = fieldNorm(doc=2477)
        0.0079915235 = product of:
          0.02397457 = sum of:
            0.02397457 = weight(_text_:29 in 2477) [ClassicSimilarity], result of:
              0.02397457 = score(doc=2477,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.15546128 = fieldWeight in 2477, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2477)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the refereed proceedings of the joint 6th International Semantic Web Conference, ISWC 2007, and the 2nd Asian Semantic Web Conference, ASWC 2007, held in Busan, Korea, in November 2007. The 50 revised full academic papers and 12 revised application papers presented together with 5 Semantic Web Challenge papers and 12 selected doctoral consortium articles were carefully reviewed and selected from a total of 257 submitted papers to the academic track and 29 to the applications track. The papers address all current issues in the field of the semantic Web, ranging from theoretical and foundational aspects to various applied topics such as management of semantic Web data, ontologies, semantic Web architecture, social semantic Web, as well as applications of the semantic Web. Short descriptions of the top five winning applications submitted to the Semantic Web Challenge competition conclude the volume.
    LCSH
    Web site development / Congresses
    Subject
    Web site development / Congresses
  7. Daconta, M.C.; Oberst, L.J.; Smith, K.T.: ¬The Semantic Web : A guide to the future of XML, Web services and knowledge management (2003) 0.03
    0.02964573 = product of:
      0.044468593 = sum of:
        0.036548957 = weight(_text_:development in 320) [ClassicSimilarity], result of:
          0.036548957 = score(doc=320,freq=4.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.22826631 = fieldWeight in 320, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.03125 = fieldNorm(doc=320)
        0.007919635 = product of:
          0.023758903 = sum of:
            0.023758903 = weight(_text_:22 in 320) [ClassicSimilarity], result of:
              0.023758903 = score(doc=320,freq=2.0), product of:
                0.1535205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04384008 = queryNorm
                0.15476047 = fieldWeight in 320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=320)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Date
    22. 5.2007 10:37:38
    LCSH
    Web site development
    Subject
    Web site development
  8. Greenberg, J.: Advancing Semantic Web via library functions (2006) 0.02
    0.022381578 = product of:
      0.06714473 = sum of:
        0.06714473 = weight(_text_:development in 244) [ClassicSimilarity], result of:
          0.06714473 = score(doc=244,freq=6.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.41935202 = fieldWeight in 244, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=244)
      0.33333334 = coord(1/3)
    
    Abstract
     This article explores the applicability of primary library functions (collection development, cataloging, reference, and circulation) to the Semantic Web. The article defines the Semantic Web, identifies similarities between the library institution and the Semantic Web, and presents research questions guiding the inquiry. The article addresses each library function and demonstrates the applicability of each function's policies to Semantic Web development. Results indicate that library functions are applicable to the Semantic Web, with "collection development" translating to "Semantic Web selection;" "cataloging" translating to "Semantic Web 'semantic' representation;" "reference" translating to "Semantic Web service," and "circulation" translating to "Semantic Web resource use." The last part of this article includes a discussion about the lack of embrace between the library and the Semantic Web communities, recommendations for closing this gap, and research conclusions.
  9. Tennis, J.T.; Sutton, S.A.: Extending the Simple Knowledge Organization System for concept management in vocabulary development applications (2008) 0.02
    0.022381578 = product of:
      0.06714473 = sum of:
        0.06714473 = weight(_text_:development in 1337) [ClassicSimilarity], result of:
          0.06714473 = score(doc=1337,freq=6.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.41935202 = fieldWeight in 1337, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=1337)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we describe the development of an extension to the Simple Knowledge Organization System (SKOS) to accommodate the needs of vocabulary development applications (VDA) managing metadata schemes and requiring close tracking of change to both those schemes and their member concepts. We take a neopragmatic epistemic stance in asserting the need for an entity in SKOS modeling to mediate between the abstract concept and the concrete scheme. While the SKOS model sufficiently describes entities for modeling the current state of a scheme in support of indexing and search on the Semantic Web, it lacks the expressive power to serve the needs of VDA needing to maintain scheme historical continuity. We demonstrate preliminarily that conceptualizations drawn from empirical work in modeling entities in the bibliographic universe, such as works, texts, and exemplars, can provide the basis for SKOS extension in ways that support more rigorous demands of capturing concept evolution in VDA.
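     A deliberately hypothetical sketch of the modelling idea (Python with rdflib): the ex:ConceptVersion class and its properties below are invented for illustration and are not the authors' actual SKOS extension; they only show how an entity could mediate between the abstract concept and its concrete, dated state in a scheme.
       from rdflib import Graph, Literal, Namespace, URIRef
       from rdflib.namespace import DCTERMS, RDF, SKOS

       EX = Namespace("http://example.org/vda#")    # invented extension namespace
       g = Graph()
       g.bind("skos", SKOS)
       g.bind("ex", EX)

       concept = URIRef("http://example.org/scheme/agriculture")
       version = URIRef("http://example.org/scheme/agriculture/2008")

       # The mediating entity carries the label and date as they stood in the scheme,
       # so that later changes to the concept can be tracked version by version.
       g.add((concept, RDF.type, SKOS.Concept))
       g.add((version, RDF.type, EX.ConceptVersion))
       g.add((version, EX.versionOf, concept))
       g.add((version, SKOS.prefLabel, Literal("Agriculture", lang="en")))
       g.add((version, DCTERMS.issued, Literal("2008-01-15")))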
  10. Joint, N.: ¬The practitioner librarian and the semantic web : ANTAEUS (2008) 0.02
    0.022381578 = product of:
      0.06714473 = sum of:
        0.06714473 = weight(_text_:development in 2012) [ClassicSimilarity], result of:
          0.06714473 = score(doc=2012,freq=6.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.41935202 = fieldWeight in 2012, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=2012)
      0.33333334 = coord(1/3)
    
    Abstract
     Purpose - To describe and evoke the potential impact of semantic web systems at the level of library practice. Design/methodology/approach - A general outline of some of the broad issues associated with the semantic web, together with a brief, simple explanation of basic semantic web procedures and some examples of specific practical outcomes of semantic web development. Findings - That the semantic web is of central relevance to contemporary LIS practitioners, whose involvement in its development is necessary in order to determine what will be the true benefits of this form of information service innovation. Research limitations/implications - Since much of the initial discussion of this topic has been developmental and futuristic, applied practitioner-oriented research is required to ground these discussions in a firm bedrock of applications. Practical implications - Semantic web technologies are of great practical relevance to areas of LIS practice such as digital repository development and open access services. Originality/value - The paper attempts to bridge the gap between the abstractions of theoretical writing in this area and the concerns of the working library professional.
  11. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.020944238 = product of:
      0.031416357 = sum of:
        0.022843098 = weight(_text_:development in 150) [ClassicSimilarity], result of:
          0.022843098 = score(doc=150,freq=4.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.14266644 = fieldWeight in 150, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.008573256 = product of:
          0.02571977 = sum of:
            0.02571977 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.02571977 = score(doc=150,freq=6.0), product of:
                0.1535205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04384008 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as ECommerce medical applications and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners properly planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  12. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.02
    0.018563017 = product of:
      0.055689048 = sum of:
        0.055689048 = product of:
          0.08353357 = sum of:
            0.041955493 = weight(_text_:29 in 2640) [ClassicSimilarity], result of:
              0.041955493 = score(doc=2640,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.27205724 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
            0.04157808 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.04157808 = score(doc=2640,freq=2.0), product of:
                0.1535205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04384008 = queryNorm
                0.2708308 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    20. 2.2009 10:29:39
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  13. Harper, C.A.; Tillett, B.B.: Library of Congress controlled vocabularies and their application to the Semantic Web (2006) 0.02
    0.018274479 = product of:
      0.054823436 = sum of:
        0.054823436 = weight(_text_:development in 242) [ClassicSimilarity], result of:
          0.054823436 = score(doc=242,freq=4.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.34239948 = fieldWeight in 242, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.046875 = fieldNorm(doc=242)
      0.33333334 = coord(1/3)
    
    Abstract
     This article discusses how various controlled vocabularies, classification schemes and thesauri can serve as some of the building blocks of the Semantic Web. These vocabularies have been developed over the course of decades, and can be put to great use in the development of robust web services and Semantic Web technologies. The article covers how initial collaboration between the Semantic Web, Library and Metadata communities is creating partnerships to complete work in this area. It then discusses some core principles of authority control before talking more specifically about subject and genre vocabularies and name authority. It is hoped that future systems for internationally shared authority data will link the world's authority data from trusted sources to benefit users worldwide. Finally, the article looks at how encoding and markup of vocabularies can help ensure compatibility with the current and future state of Semantic Web development and provides examples of how this work can help improve the findability and navigation of information on the World Wide Web.
  14. Severiens, T.; Thiemann, C.: RDF database for PhysNet and similar portals (2006) 0.02
    0.017229345 = product of:
      0.05168803 = sum of:
        0.05168803 = weight(_text_:development in 245) [ClassicSimilarity], result of:
          0.05168803 = score(doc=245,freq=2.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.32281733 = fieldWeight in 245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.0625 = fieldNorm(doc=245)
      0.33333334 = coord(1/3)
    
    Abstract
     PhysNet (www.physnet.net) is a portal for physics that has been run since 1995 and is continuously being developed; today it uses an OWL Lite ontology and a MySQL database for storing triples with facts such as department information, postal addresses, GPS coordinates, URLs of publication repositories, etc. The article focuses on the structure and development of the underlying ontology; it also gives a detailed overview of an online, web-based editorial tool for maintaining the facts database.
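     As a rough sketch of the storage pattern the abstract describes, the snippet below keeps subject/predicate/object fact rows in a relational table, with Python's built-in sqlite3 standing in for the MySQL back end; the table schema, prefixes and sample values are assumptions, not taken from PhysNet.
       import sqlite3

       con = sqlite3.connect(":memory:")
       con.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")

       # Invented sample facts of the kinds the abstract mentions
       # (department information, postal address, GPS coordinate, repository URL).
       facts = [
           ("dept:example-physics", "vcard:postal-code", "12345"),
           ("dept:example-physics", "geo:lat", "53.15"),
           ("dept:example-physics", "repo:oai-url", "http://example.org/oai"),
       ]
       con.executemany("INSERT INTO triples VALUES (?, ?, ?)", facts)

       # Look up every stored fact about one department.
       for predicate, obj in con.execute(
           "SELECT predicate, object FROM triples WHERE subject = ?",
           ("dept:example-physics",),
       ):
           print(predicate, obj)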
  15. Sure, Y.; Erdmann, M.; Studer, R.: OntoEdit: collaborative engineering of ontologies (2004) 0.02
    0.017229345 = product of:
      0.05168803 = sum of:
        0.05168803 = weight(_text_:development in 4405) [ClassicSimilarity], result of:
          0.05168803 = score(doc=4405,freq=8.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.32281733 = fieldWeight in 4405, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.03125 = fieldNorm(doc=4405)
      0.33333334 = coord(1/3)
    
    Abstract
     Developing ontologies is central to our vision of Semantic Web-based knowledge management. The methodology described in Chapter 3 guides the development of ontologies for different applications. However, because of the size of ontologies, their complexity, their formal underpinnings and the necessity to reach a shared understanding within a group of people when defining an ontology, ontology construction is still far from being a well-understood process. Concerning the methodology, OntoEdit focuses on three of the main steps for ontology development (the methodology is described in Chapter 3), viz. the kick-off, refinement, and evaluation phases. We describe the steps supported by OntoEdit and focus on the collaborative aspects that occur during each of these steps. First, all requirements of the envisaged ontology are collected during the kick-off phase. As is typical for ontology engineering, ontology engineers and domain experts are joined in a team that works together on a description of the domain and the goal of the ontology, design guidelines, available knowledge sources (e.g. re-usable ontologies and thesauri), potential users, and use cases and applications supported by the ontology. The output of this phase is a semi-formal description of the ontology. Second, during the refinement phase, the team extends the semi-formal description in several iterations and formalizes it in an appropriate representation language like RDF(S) or, more advanced, DAML+OIL. The output of this phase is a mature ontology (the 'target ontology'). Third, the target ontology needs to be evaluated according to the requirement specifications. Typically this phase serves as a proof of the usefulness of ontologies (and ontology-based applications) and may involve the engineering team as well as end users of the targeted application. The output of this phase is an evaluated ontology, ready for roll-out into a productive environment. Support for these collaborative development steps within the ontology development methodology is crucial in order to meet the conflicting needs for ease of use and construction of complex ontology structures. We now illustrate OntoEdit's support for each of the supported steps. The examples shown are taken from the Swiss Life case study on skills management (cf. Chapter 12).
  16. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.01591116 = product of:
      0.04773348 = sum of:
        0.04773348 = product of:
          0.07160021 = sum of:
            0.035961855 = weight(_text_:29 in 4649) [ClassicSimilarity], result of:
              0.035961855 = score(doc=4649,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23319192 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
            0.035638355 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.035638355 = score(doc=4649,freq=2.0), product of:
                0.1535205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  17. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Wa, R.Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.02
    0.01591116 = product of:
      0.04773348 = sum of:
        0.04773348 = product of:
          0.07160021 = sum of:
            0.035961855 = weight(_text_:29 in 662) [ClassicSimilarity], result of:
              0.035961855 = score(doc=662,freq=2.0), product of:
                0.1542157 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23319192 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
            0.035638355 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
              0.035638355 = score(doc=662,freq=2.0), product of:
                0.1535205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04384008 = queryNorm
                0.23214069 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:29:20
  18. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.015473261 = product of:
      0.04641978 = sum of:
        0.04641978 = product of:
          0.13925934 = sum of:
            0.13925934 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.13925934 = score(doc=701,freq=2.0), product of:
                0.37167668 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04384008 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  19. Weibel, S.L.: Social Bibliography : a personal perspective on libraries and the Semantic Web (2006) 0.02
    0.015075676 = product of:
      0.04522703 = sum of:
        0.04522703 = weight(_text_:development in 250) [ClassicSimilarity], result of:
          0.04522703 = score(doc=250,freq=2.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.28246516 = fieldWeight in 250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.0546875 = fieldNorm(doc=250)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a personal perspective on libraries and the Semantic Web. The paper discusses computing power, increased availability of processable text, social software developments and the ideas underlying Web 2.0 and the impact of these developments in the context of libraries and information. The article concludes with a discussion of social bibliography and the declining hegemony of catalog records, and emphasizes the strengths of librarianship and the profession's ability to contribute to Semantic Web development.
  20. Fensel, D.; Staab, S.; Studer, R.; Harmelen, F. van; Davies, J.: ¬A future perspective : exploiting peer-to-peer and the Semantic Web for knowledge management (2004) 0.02
    0.015075676 = product of:
      0.04522703 = sum of:
        0.04522703 = weight(_text_:development in 2262) [ClassicSimilarity], result of:
          0.04522703 = score(doc=2262,freq=2.0), product of:
            0.16011542 = queryWeight, product of:
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.04384008 = queryNorm
            0.28246516 = fieldWeight in 2262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.652261 = idf(docFreq=3116, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2262)
      0.33333334 = coord(1/3)
    
    Abstract
    Over the past few years, we have seen a growing interest in the potential of both peer-to-peer (P2P) computing and the use of more formal approaches to knowledge management, involving the development of ontologies. This penultimate chapter discusses possibilities that both approaches may offer for more effective and efficient knowledge management. In particular, we investigate how the two paradigms may be combined. In this chapter, we describe our vision in terms of a set of future steps that need to be taken to bring the results described in earlier chapters to their full potential.

Authors

Languages

  • e 88
  • d 14
  • f 1

Types

  • a 68
  • m 22
  • el 20
  • s 14
  • n 1
  • x 1

Subjects