Search (327 results, page 1 of 17)

  • theme_ss:"Semantic Web"
  1. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.14
    0.13870208 = product of:
      0.1849361 = sum of:
        0.012739806 = weight(_text_:a in 1026) [ClassicSimilarity], result of:
          0.012739806 = score(doc=1026,freq=12.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.21843673 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.11216935 = weight(_text_:70 in 1026) [ClassicSimilarity], result of:
          0.11216935 = score(doc=1026,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.41413653 = fieldWeight in 1026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.06002696 = sum of:
          0.012055466 = weight(_text_:information in 1026) [ClassicSimilarity], result of:
            0.012055466 = score(doc=1026,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.13576832 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
          0.047971494 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
            0.047971494 = score(doc=1026,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
      0.75 = coord(3/4)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Pages
    S.70-81
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Type
    a
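The indented tree under each hit is Lucene's `explain()` output for ClassicSimilarity (TF-IDF) scoring. A minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(termFreq), queryWeight = idf · queryNorm, fieldWeight = tf · idf · fieldNorm, leaf score = queryWeight · fieldWeight), that reproduces the score of hit 1 from the values shown in its tree:

```python
import math

# Leaf weights of hit 1 (doc 1026), recomputed from the explain() tree above.
# Assumes Lucene ClassicSimilarity:
#   tf          = sqrt(termFreq)
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   leaf score  = queryWeight * fieldWeight
QUERY_NORM = 0.05058132

def leaf_weight(freq, idf, field_norm):
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * QUERY_NORM
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

w_a    = leaf_weight(12.0, 1.153047,  0.0546875)  # ~0.012739806 (_text_:a)
w_70   = leaf_weight(2.0,  5.354766,  0.0546875)  # ~0.11216935  (_text_:70)
w_info = leaf_weight(2.0,  1.7554779, 0.0546875)  # ~0.012055466 (_text_:information)
w_22   = leaf_weight(2.0,  3.5018296, 0.0546875)  # ~0.047971494 (_text_:22)

# Top level: three of four query clauses matched, so the summed weight is
# scaled by the coordination factor coord(3/4) = 0.75.
total = (w_a + w_70 + (w_info + w_22)) * 0.75     # ~0.13870208
print(f"{total:.8f}")
```

The same arithmetic applies to every other hit; only the idf, termFreq, fieldNorm, and coord values change per document.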
  2. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.07
    0.07356448 = product of:
      0.09808597 = sum of:
        0.010507616 = weight(_text_:a in 5300) [ClassicSimilarity], result of:
          0.010507616 = score(doc=5300,freq=16.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.18016359 = fieldWeight in 5300, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.08012097 = weight(_text_:70 in 5300) [ClassicSimilarity], result of:
          0.08012097 = score(doc=5300,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.29581183 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.0074573862 = product of:
          0.0149147725 = sum of:
            0.0149147725 = weight(_text_:information in 5300) [ClassicSimilarity], result of:
              0.0149147725 = score(doc=5300,freq=6.0), product of:
                0.088794395 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.05058132 = queryNorm
                0.16796975 = fieldWeight in 5300, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5300)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying in its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data-the web domain-with the potentiality of the automated inference of "intelligent" systems-the semantic component.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.701-714
    Type
    a
  3. Fensel, A.: Towards semantic APIs for research data services (2017) 0.06
    0.05976234 = product of:
      0.11952468 = sum of:
        0.0073553314 = weight(_text_:a in 4439) [ClassicSimilarity], result of:
          0.0073553314 = score(doc=4439,freq=4.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.12611452 = fieldWeight in 4439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4439)
        0.11216935 = weight(_text_:70 in 4439) [ClassicSimilarity], result of:
          0.11216935 = score(doc=4439,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.41413653 = fieldWeight in 4439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4439)
      0.5 = coord(2/4)
    
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.157-169
    Type
    a
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.057375073 = product of:
      0.076500095 = sum of:
        0.0535577 = product of:
          0.1606731 = sum of:
            0.1606731 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1606731 = score(doc=701,freq=2.0), product of:
                0.428829 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05058132 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.012609138 = weight(_text_:a in 701) [ClassicSimilarity], result of:
          0.012609138 = score(doc=701,freq=36.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.2161963 = fieldWeight in 701, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.010333256 = product of:
          0.020666512 = sum of:
            0.020666512 = weight(_text_:information in 701) [ClassicSimilarity], result of:
              0.020666512 = score(doc=701,freq=18.0), product of:
                0.088794395 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.05058132 = queryNorm
                0.23274568 = fieldWeight in 701, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    By the explosion of possibilities for a ubiquitous content production, the information overload problem reaches the level of complexity which cannot be managed by traditional modelling approaches anymore. Due to their pure syntactical nature traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have been emerged from an interesting conceptualisation paradigm to a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process in which a user, due to the unfamiliarity with the underlying repository and/or query syntax, just approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, whereas the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. 
     Moreover, the notion of content relevance for a user's query evolves from a content dependent artefact to the multidimensional context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the possibilities to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.06
    0.05558668 = product of:
      0.074115574 = sum of:
        0.0051476597 = weight(_text_:a in 5329) [ClassicSimilarity], result of:
          0.0051476597 = score(doc=5329,freq=6.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.088261776 = fieldWeight in 5329, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=5329)
        0.06409677 = weight(_text_:70 in 5329) [ClassicSimilarity], result of:
          0.06409677 = score(doc=5329,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.23664945 = fieldWeight in 5329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.03125 = fieldNorm(doc=5329)
        0.0048711435 = product of:
          0.009742287 = sum of:
            0.009742287 = weight(_text_:information in 5329) [ClassicSimilarity], result of:
              0.009742287 = score(doc=5329,freq=4.0), product of:
                0.088794395 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.05058132 = queryNorm
                0.10971737 = fieldWeight in 5329, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5329)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking. To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses the provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions. Providing an updated overview of methods, technologies and systems related to Linked Data this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.
    Footnote
    Rez. in: JASIST 70(2019) no.8, S.905-907 (Dean Allemang).
    LCSH
    Information storage and retrieval
    Subject
    Information storage and retrieval
  6. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.05
    0.050158218 = product of:
      0.100316435 = sum of:
        0.007430006 = weight(_text_:a in 2090) [ClassicSimilarity], result of:
          0.007430006 = score(doc=2090,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.12739488 = fieldWeight in 2090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.092886426 = sum of:
          0.024355719 = weight(_text_:information in 2090) [ClassicSimilarity], result of:
            0.024355719 = score(doc=2090,freq=4.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.27429342 = fieldWeight in 2090, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.078125 = fieldNorm(doc=2090)
          0.06853071 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
            0.06853071 = score(doc=2090,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.38690117 = fieldWeight in 2090, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2090)
      0.5 = coord(2/4)
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Type
    a
  7. Neubauer, G.: Visualization of typed links in linked data (2017) 0.04
    0.041917987 = product of:
      0.083835974 = sum of:
        0.003715003 = weight(_text_:a in 3912) [ClassicSimilarity], result of:
          0.003715003 = score(doc=3912,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.06369744 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.08012097 = weight(_text_:70 in 3912) [ClassicSimilarity], result of:
          0.08012097 = score(doc=3912,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.29581183 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.5 = coord(2/4)
    
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
    Type
    a
  8. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.04
    0.035214484 = product of:
      0.07042897 = sum of:
        0.010402009 = weight(_text_:a in 759) [ClassicSimilarity], result of:
          0.010402009 = score(doc=759,freq=8.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.17835285 = fieldWeight in 759, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.06002696 = sum of:
          0.012055466 = weight(_text_:information in 759) [ClassicSimilarity], result of:
            0.012055466 = score(doc=759,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.13576832 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.047971494 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.047971494 = score(doc=759,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.5 = coord(2/4)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Type
    a
  9. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.03
    0.031737078 = product of:
      0.063474156 = sum of:
        0.0044580037 = weight(_text_:a in 3197) [ClassicSimilarity], result of:
          0.0044580037 = score(doc=3197,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.07643694 = fieldWeight in 3197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.05901615 = sum of:
          0.017897725 = weight(_text_:information in 3197) [ClassicSimilarity], result of:
            0.017897725 = score(doc=3197,freq=6.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.20156369 = fieldWeight in 3197, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
          0.041118424 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
            0.041118424 = score(doc=3197,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.23214069 = fieldWeight in 3197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
      0.5 = coord(2/4)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an Introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24. 8.2016 14:03:22
    Series
    Chandos information professional series
  10. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.03
    0.031185757 = product of:
      0.062371515 = sum of:
        0.010919834 = weight(_text_:a in 2556) [ClassicSimilarity], result of:
          0.010919834 = score(doc=2556,freq=12.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.18723148 = fieldWeight in 2556, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.051451683 = sum of:
          0.010333257 = weight(_text_:information in 2556) [ClassicSimilarity], result of:
            0.010333257 = score(doc=2556,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.116372846 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
          0.041118424 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
            0.041118424 = score(doc=2556,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.23214069 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
      0.5 = coord(2/4)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Source
    Online information review. 27(2003) no.2, S.94-101
    Type
    a
  11. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.03
    0.030710042 = product of:
      0.061420083 = sum of:
        0.0099684 = weight(_text_:a in 662) [ClassicSimilarity], result of:
          0.0099684 = score(doc=662,freq=10.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.1709182 = fieldWeight in 662, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=662)
        0.051451683 = sum of:
          0.010333257 = weight(_text_:information in 662) [ClassicSimilarity], result of:
            0.010333257 = score(doc=662,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.116372846 = fieldWeight in 662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=662)
          0.041118424 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
            0.041118424 = score(doc=662,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.23214069 = fieldWeight in 662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=662)
      0.5 = coord(2/4)
    
    Abstract
    The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Arts and Architecture Thesaurus (AAT) through the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
    Date
    22. 3.2013 19:29:20
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.464-479
    Type
    a
  12. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.03
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
    Type
    a
  13. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.03
    Abstract
    Metadata and semantics are integral to any information system and significant to the sphere of Web data. Research focusing on metadata and semantics is crucial for advancing our understanding and knowledge of metadata; and, more profoundly for being able to effectively discover, use, archive, and repurpose information. In response to this need, researchers are actively examining methods for generating, reusing, and interchanging metadata. Integrated with these developments is research on the application of computational methods, linked data, and data analytics. A growing body of work also targets conceptual and theoretical designs providing foundational frameworks for metadata and semantic applications. There is no doubt that metadata weaves its way into nearly every aspect of our information ecosystem, and there is great motivation for advancing the current state of metadata and semantics. To this end, it is vital that scholars and practitioners convene and share their work.
     The MTSR 2013 program and the contents of these proceedings show a rich diversity of research and practice, drawing on problems from metadata and semantically focused tools and technologies, linked data, cross-language semantics, ontologies, metadata models, and semantic systems and metadata standards. The general session of the conference included 18 papers covering a broad spectrum of topics and demonstrating the interdisciplinary nature of the metadata field; it was divided into three main themes: platforms for research data sets, system architecture and data management; metadata and ontology validation, evaluation, mapping and interoperability; and content management. Metadata as a research topic is maturing, and the conference also supported the following five tracks: Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures; Metadata and Semantics for Cultural Collections and Applications; Metadata and Semantics for Agriculture, Food and Environment; Big Data and Digital Libraries in Health, Science and Technology; and European and National Projects, and Project Networking. Each track had a rich selection of papers, giving broader diversity to MTSR and enabling deeper exploration of significant topics.
     All the papers underwent a thorough and rigorous peer-review process. The review and selection this year was highly competitive, and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were accepted, for a total of 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the Arts Department of KU Leuven (Belgium) and director of the university library, addressed semantic research drawing on his work with Europeana. The title of his presentation was, "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor at our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new potential at the intersection of search and linked data. The title of his talk was, "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
    Date
    17.12.2013 12:51:22
    Series
    Communications in computer and information science; vol.390
  14. Semantic web & linked data : Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis (2010) 0.03
    Content
    LINKED DATA IM GEOINFORMATIONSBEREICH - CHANCEN ODER GEFAHR? Geodaten - von der Verantwortung des Dealers / Karsten Neumann - Computergestützte Freizeitplanung basierend auf Points Of Interest / Peter Bäcker und Ugur Macit VON LINKED DATA ZU VERLINKTEN DIALOGEN Die globalisierte Semantic Web Informationswissenschaftlerin / Dierk Eichel - Kommunikation und Kontext. Überlegungen zur Entwicklung virtueller Diskursräume für die Wissenschaft / Ben Kaden und Maxi Kindling - Konzeptstudie: Die informationswissenschaftliche Zeitschrift der Zukunft / Lambert Heller und Heinz Pampel SEMANTIC WEB & LINKED DATA IM BILDUNGSWESEN Einsatz von Semantic Web-Technologien am Informationszentrum Bildung / Carola Carstens und Marc Rittberger - Bedarfsgerecht, kontextbezogen, qualitätsgesichert: Von der Information zum Wertschöpfungsfaktor Wissen am Beispiel einer Wissenslandkarte als dynamisches System zur Repräsentation des Wissens in der Berufsbildungsforschung / Sandra Dücker und Markus Linten - Virtuelle Forschungsumgebungen und Forschungsdaten für Lehre und Forschung: Informationsinfrastrukturen für die (Natur-)Wissenschaften / Matthias Schulze
    Isbn
    978-3-925474-70-5
    RSWK
    Semantic Web / Indexierung <Inhaltserschließung> / Information Retrieval / Kongress / Frankfurt <Main, 2010>
    Subject
    Semantic Web / Indexierung <Inhaltserschließung> / Information Retrieval / Kongress / Frankfurt <Main, 2010>
  15. Brunetti, J.M.; Roberto García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    Abstract
     Purpose - The growing volumes of semantic data available on the web result in the need to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed, so that they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs. Originality/value - Obtaining overviews of semantic data sets cannot be easily done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.519-536
    Type
    a
  16. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.03
    Abstract
     The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits - the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and about the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data are drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  17. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    Abstract
     Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for the assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.494-518
    Type
    a
  18. Subirats, I.; Prasad, A.R.D.; Keizer, J.; Bagdanov, A.: Implementation of rich metadata formats and semantic tools using DSpace (2008) 0.02
    Abstract
     This poster explores the customization of DSpace to allow the use of the AGRIS Application Profile metadata standard and the AGROVOC thesaurus. The objective is the adaptation of DSpace, through the least invasive code changes, either in the form of plug-ins or add-ons, to the specific needs of the Agricultural Sciences and Technology community. Metadata standards such as AGRIS AP, and Knowledge Organization Systems such as the AGROVOC thesaurus, provide mechanisms for sharing information in a standardized manner by recommending the use of common semantics and interoperable syntax (Subirats et al., 2007). AGRIS AP was created to enhance the description, exchange and subsequent retrieval of agricultural Document-like Information Objects (DLIOs). It is a metadata schema which draws from metadata standards such as Dublin Core (DC), the Australian Government Locator Service Metadata (AGLS) and the Agricultural Metadata Element Set (AgMES) namespaces. It allows sharing of information across dispersed bibliographic systems (FAO, 2005). AGROVOC is a multilingual structured thesaurus covering agricultural and related domains. Its main role is to standardize the indexing process in order to make searching simpler and more efficient. AGROVOC is developed by FAO (Lauser et al., 2006). The customization of DSpace is taking place in several phases. First, the AGRIS AP metadata schema was mapped onto the DSpace metadata model, with several enhancements implemented to support AGRIS AP elements. Next, AGROVOC will be integrated as a controlled vocabulary accessed through a local SKOS or OWL file. Eventually the system will be configurable to access AGROVOC through local files or remotely via web services. Finally, spell checking and tooltips will be incorporated in the user interface to support metadata editing. Adapting DSpace to support AGRIS AP and annotation using the semantically rich AGROVOC thesaurus transforms DSpace into a powerful, domain-specific system for the annotation and exchange of bibliographic metadata in the agricultural domain.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  19. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatic extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web.
The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG-7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners properly planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
    LCSH
    Information storage and retrieval systems
    RSWK
    Semantic Web / Multimedia / Automatische Indexierung / Information Retrieval
    Subject
    Semantic Web / Multimedia / Automatische Indexierung / Information Retrieval
    Information storage and retrieval systems
  20. Daconta, M.C.; Oberst, L.J.; Smith, K.T.: ¬The Semantic Web : A guide to the future of XML, Web services and knowledge management (2003) 0.02
    
    Abstract
    "The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation." - Tim Berners-Lee, "Scientific American", May 2001. This authoritative guide shows how the "Semantic Web" works technically and how businesses can utilize it to gain a competitive advantage. It explains what taxonomies and ontologies are as well as their importance in constructing the Semantic Web. The companion web site includes further updates as the framework develops and links to related sites.
    BK
    85.20 Betriebliche Information und Kommunikation
    Classification
    85.20 Betriebliche Information und Kommunikation
    Date
    22. 5.2007 10:37:38
