Search (314 results, page 1 of 16)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation), which makes the results of a retrieval process of very limited use for the user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. It is therefore necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation procedure into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process (a simplified sketch follows this record). In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is the key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
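    The Librarian Agent Query Refinement Process mentioned in the abstract can be pictured, in a deliberately simplified form, as a loop in which the system maps query terms onto ontology concepts and proposes narrower concepts as refinements. The following Python sketch is purely illustrative: the toy concept hierarchy, the suggest_refinements helper and all term names are invented and are not taken from the thesis.

      # Minimal sketch of ontology-driven query refinement (hypothetical data).
      # A tiny "ontology": each concept maps to its narrower concepts.
      NARROWER = {
          "vehicle": ["car", "bicycle", "truck"],
          "car": ["electric car", "sports car"],
          "retrieval": ["ontology-based retrieval", "keyword retrieval"],
      }

      def interpret(query):
          """Map query terms onto known ontology concepts."""
          return [t for t in query.lower().split() if t in NARROWER]

      def suggest_refinements(query):
          """Propose narrower concepts the user could add to sharpen the query."""
          return {concept: NARROWER[concept] for concept in interpret(query)}

      if __name__ == "__main__":
          # Instead of just evaluating an ambiguous query, the system offers
          # conceptual refinements that anticipate the information need.
          print(suggest_refinements("cheap car retrieval"))
          # {'car': ['electric car', 'sports car'],
          #  'retrieval': ['ontology-based retrieval', 'keyword retrieval']}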
  2. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.01
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Editor
    Stamou, G. and S. Kollias
    Footnote
    Review in: JASIST 58(2007) no.3, pp.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as ECommerce medical applications and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners properly planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  3. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.01
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Type
    a
  4. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Type
    a
  5. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a large ontology from Wikipedia and WordNet (2008) 0.01
    Abstract
    This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95%-as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
    Type
    a
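    The YAGO abstract above notes that type checking against the taxonomic Is-A hierarchy helps keep the extracted facts precise. A rough, hypothetical illustration of that idea (not YAGO's actual data model or code): a fact is accepted only if its arguments belong to the classes its relation expects.

      # Hypothetical sketch of type-checked fact insertion over an Is-A hierarchy.
      ISA = {  # entity or class -> parent class
          "Albert_Einstein": "physicist",
          "physicist": "person",
          "Ulm": "city",
          "city": "location",
      }

      def is_a(entity, cls):
          """Walk the Is-A chain upwards to test class membership."""
          while entity is not None:
              if entity == cls:
                  return True
              entity = ISA.get(entity)
          return False

      # relation -> (expected subject class, expected object class)
      SIGNATURES = {"bornIn": ("person", "location")}

      def add_fact(kb, subj, rel, obj):
          """Accept a (subject, relation, object) fact only if it type-checks."""
          dom, rng = SIGNATURES[rel]
          if is_a(subj, dom) and is_a(obj, rng):
              kb.append((subj, rel, obj))
              return True
          return False

      kb = []
      print(add_fact(kb, "Albert_Einstein", "bornIn", "Ulm"))   # True
      print(add_fact(kb, "Ulm", "bornIn", "Albert_Einstein"))   # False: fails the type check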
  6. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.01
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Type
    a
  7. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.01
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
  8. Schreiber, G.: Principles and pragmatics of a Semantic Culture Web : tearing down walls and building bridges (2008) 0.01
  9. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Type
    a
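    A concrete way to see the argument in the abstract above: two DTD-valid XML fragments can use different element names for the same concept, and XML itself provides no way to state that they mean the same thing, whereas RDF lets both sources commit to one shared property URI. The snippet below is an invented illustration using Python's standard library and rdflib, not code from the SHOE project.

      # Two DTD-valid XML fragments use different element names for the same
      # concept; XML alone gives no way to state that they mean the same thing.
      import xml.etree.ElementTree as ET

      a = ET.fromstring("<book><author>Hendler, J.</author></book>")
      b = ET.fromstring("<publication><creator>Heflin, J.</creator></publication>")
      print(a.find("author").text, "|", b.find("creator").text)

      # With RDF, both sources can commit to a shared property URI, so the data
      # merges into one graph a machine can query uniformly. URIs are invented.
      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DCTERMS

      g = Graph()
      g.add((URIRef("http://example.org/doc/1"), DCTERMS.creator, Literal("Hendler, J.")))
      g.add((URIRef("http://example.org/doc/2"), DCTERMS.creator, Literal("Heflin, J.")))
      print(len(g))   # 2 statements that share one creator property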
  10. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.00
    Abstract
    Library catalogues contain an enormous amount of structured, high-quality data, however, this data is generally not made available to semantic web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and the mechanisms used to make data available, rather than perfect description of the individual resources. We also present a method of creating links between records of the same work.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
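    Since the abstract above emphasises links to and between resources rather than perfect descriptions, a minimal sketch of that publishing pattern with the rdflib library might look as follows; every URI, the Record class and the instanceOfWork property are invented stand-ins, not the actual LIBRIS identifiers or vocabulary choices.

      # Minimal linked-data sketch with the rdflib library; all names are invented.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import DCTERMS, RDF

      EX = Namespace("http://example.org/bib/")
      g = Graph()

      rec_a = EX["record/123"]     # a catalogue record
      rec_b = EX["record/456"]     # another record describing the same work
      work = EX["work/789"]        # a work-level resource linking them

      g.add((rec_a, RDF.type, EX.Record))
      g.add((rec_a, DCTERMS.title, Literal("An example title")))
      g.add((rec_b, RDF.type, EX.Record))

      # Links between records of the same work, via a hypothetical property.
      g.add((rec_a, EX.instanceOfWork, work))
      g.add((rec_b, EX.instanceOfWork, work))

      print(g.serialize(format="turtle"))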
  11. Blumauer, A.; Pellegrini, T.: Semantic Web Revisited : Eine kurze Einführung in das Social Semantic Web (2009) 0.00
    Pages
    pp.3-22
    Source
    Social Semantic Web: Web 2.0, was nun? Eds.: A. Blumauer and T. Pellegrini
    Type
    a
  12. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.00
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Type
    a
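    The faceted approach described in the abstract above can be illustrated with a toy example in which LCSH-style terms are attached to a resource as independent descriptive facets, and retrieval filters on any combination of them; the records and facet names below are invented.

      # Hypothetical faceted descriptions: controlled terms assigned as
      # independent facets rather than one hierarchical class mark.
      records = [
          {"title": "Intro to the Semantic Web",
           "facets": {"topic": "Semantic Web", "form": "Electronic books", "place": "United States"}},
          {"title": "RDF in libraries",
           "facets": {"topic": "Semantic Web", "form": "Conference papers", "place": "Germany"}},
      ]

      def search(records, **wanted):
          """Return titles of records matching every requested facet value."""
          return [r["title"] for r in records
                  if all(r["facets"].get(k) == v for k, v in wanted.items())]

      print(search(records, topic="Semantic Web", form="Conference papers"))
      # ['RDF in libraries']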
  13. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.00
    Date
    31. 7.2010 16:58:22
    Type
    a
  14. Bechhofer, S.; Harmelen, F. van; Hendler, J.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F.; Stein, L.A.: OWL Web Ontology Language Reference (2004) 0.00
    Abstract
    The Web Ontology Language OWL is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL is developed as a vocabulary extension of RDF (the Resource Description Framework) and is derived from the DAML+OIL Web Ontology Language. This document contains a structured informal description of the full set of OWL language constructs and is meant to serve as a reference for OWL users who want to construct OWL ontologies.
    Editor
    Dean, M. and G. Schreiber
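    Because OWL is a vocabulary extension of RDF, an OWL ontology is itself just an RDF graph. The following small example, built with the rdflib library and using invented class and property names, shows the flavour of the constructs the reference document describes (classes, a subclass axiom, and an object property with domain and range); it is an illustration, not part of the OWL specification.

      # A tiny OWL ontology expressed as RDF triples with rdflib; the class and
      # property names are invented for illustration only.
      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL, RDF, RDFS

      EX = Namespace("http://example.org/onto#")
      g = Graph()

      # Classes and a subclass axiom
      g.add((EX.Publication, RDF.type, OWL.Class))
      g.add((EX.JournalArticle, RDF.type, OWL.Class))
      g.add((EX.JournalArticle, RDFS.subClassOf, EX.Publication))

      # An object property with domain and range
      g.add((EX.Person, RDF.type, OWL.Class))
      g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))
      g.add((EX.hasAuthor, RDFS.domain, EX.Publication))
      g.add((EX.hasAuthor, RDFS.range, EX.Person))

      print(g.serialize(format="turtle"))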
  15. Dunsire, G.: FRBR and the Semantic Web (2012) 0.00
    Abstract
    Each of the FR family of models has been represented in Resource Description Framework (RDF), the basis of the Semantic Web. This has involved analysis of the entity-relationship diagrams and text of the models to identify and create the RDF classes, properties, definitions and scope notes required. The work has shown that it is possible to seamlessly connect the models within a semantic framework, specifically in the treatment of names, identifiers, and subjects, and link the RDF elements to those in related namespaces.
    Content
    Contribution to a special issue "The FRBR family of conceptual models: toward a linked future"
    Type
    a
  16. Willer, M.; Dunsire, G.: ISBD, the UNIMARC bibliographic format, and RDA : interoperability issues in namespaces and the linked data environment (2014) 0.00
    Abstract
    The article is an updated and expanded version of a paper presented to International Federation of Library Associations and Institutions in 2013. It describes recent work involving the representation of International Standard for Bibliographic Description (ISBD) and UNIMARC (UNIversal MARC) in Resource Description Framework (RDF), the basis of the Semantic Web and linked data. The UNIMARC Bibliographic format is used to illustrate issues arising from the development of a bibliographic element set and its semantic alignment with ISBD. The article discusses the use of such alignments in the automated processing of linked data for interoperability, using examples from ISBD, UNIMARC, and Resource Description and Access.
    Footnote
    Contribution in a special issue "ISBD: The Bibliographic Content Standard "
    Type
    a
  17. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.00
    Abstract
    The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Arts and Architecture Thesaurus (AAT) through the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
    Date
    22. 3.2013 19:29:20
    Type
    a
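    The reconciliation step described in the abstract above, matching terms from a locally developed vocabulary against LCSH or AAT, can be approximated very crudely by normalised fuzzy string matching. The sketch below uses only the Python standard library; the sample terms are invented, and real reconciliation (for example in OpenRefine) queries external services for the controlled vocabularies rather than matching against an in-memory list.

      # Crude vocabulary reconciliation sketch using fuzzy string matching.
      from difflib import get_close_matches

      controlled_vocab = ["Stained glass", "Mural painting", "Woodcut", "Tapestry"]
      local_terms = ["stained-glass windows", "murals", "wood cut prints"]

      def reconcile(term, vocabulary, cutoff=0.5):
          """Return the best controlled term for a local term, or None."""
          matches = get_close_matches(term.lower(),
                                      [v.lower() for v in vocabulary],
                                      n=1, cutoff=cutoff)
          if not matches:
              return None
          # map the lower-cased match back to the original controlled label
          return next(v for v in vocabulary if v.lower() == matches[0])

      for t in local_terms:
          print(t, "->", reconcile(t, controlled_vocab))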
  18. Liang, A.; Salokhe, G.; Sini, M.; Keizer, J.: Towards an infrastructure for semantic applications : methodologies for semantic integration of heterogeneous resources (2006) 0.00
    Abstract
    The semantic heterogeneity presented by Web information in the Agricultural domain presents tremendous information retrieval challenges. This article presents work taking place at the Food and Agriculture Organizations (FAO) which addresses this challenge. Based on the analysis of resources in the domain of agriculture, this paper proposes (a) an application profile (AP) for dealing with the problem of heterogeneity originating from differences in terminologies, domain coverage, and domain modelling, and (b) a root application ontology (AAO) based on the application profile which can serve as a basis for extending knowledge of the domain. The paper explains how even a small investment in the enhancement of relations between vocabularies, both metadata and domain-specific, yields a relatively large return on investment.
    Type
    a
  19. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.00
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Eds.: A. Zerfaß et al.
    Type
    a
  20. Cali, A.: Ontology querying : datalog strikes back (2017) 0.00
    Abstract
    In this tutorial we address the problem of ontology querying, that is, the problem of answering queries against a theory constituted by facts (the data) and inference rules (the ontology). A varied landscape of ontology languages exists in the scientific literature, with several degrees of complexity of query processing. We argue that Datalog±, a family of languages derived from Datalog, is a powerful tool for ontology querying. To illustrate the impact of this comeback of Datalog, we present the basic paradigms behind the main Datalog± as well as some recent extensions. We also present some efficient query processing techniques for some cases.
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Type
    a
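    The tutorial abstract above treats ontology querying as answering queries over facts (the data) plus inference rules (the ontology). The core of plain Datalog evaluation can be shown with a naive bottom-up (forward-chaining) loop to a fixpoint; the facts and rules below are invented, and Datalog± extensions such as existential rules are deliberately left out of this sketch.

      # Naive bottom-up (forward-chaining) evaluation of two plain Datalog rules:
      #   ancestor(X, Y) :- parent(X, Y).
      #   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
      facts = {("parent", "anna", "bob"), ("parent", "bob", "carla")}

      def step(facts):
          """Apply both rules once and return the enlarged fact set."""
          derived = set(facts)
          for (p, x, y) in facts:
              if p == "parent":
                  derived.add(("ancestor", x, y))
          for (p, x, y) in facts:
              for (q, y2, z) in facts:
                  if p == "parent" and q == "ancestor" and y == y2:
                      derived.add(("ancestor", x, z))
          return derived

      # Iterate to a fixpoint: stop when no new facts are derived.
      while True:
          new_facts = step(facts)
          if new_facts == facts:
              break
          facts = new_facts

      # Query: ancestor(anna, Z)?
      print(sorted(z for (p, x, z) in facts if p == "ancestor" and x == "anna"))
      # ['bob', 'carla']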

Languages

  • e 242
  • d 69
  • f 1

Types

  • a 213
  • el 81
  • m 43
  • s 17
  • n 10
  • x 6
  • r 2
