Search (65 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.09
    0.09242021 = product of:
      0.27726063 = sum of:
        0.27726063 = sum of:
          0.2304783 = weight(_text_:indexing in 3197) [ClassicSimilarity], result of:
            0.2304783 = score(doc=3197,freq=34.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              1.0462552 = fieldWeight in 3197, product of:
                5.8309517 = tf(freq=34.0), with freq of:
                  34.0 = termFreq=34.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
          0.046782322 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
            0.046782322 = score(doc=3197,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.23214069 = fieldWeight in 3197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
      0.33333334 = coord(1/3)
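
    The tree above is Lucene ClassicSimilarity "explain" output. As a minimal sketch (not part of the original record), the following Python reconstructs the 0.09242021 score from the quantities shown; queryNorm is copied from the output, since Lucene derives it from the full query rather than from this document alone.

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq, doc_freq, max_docs, field_norm, query_norm):
          # score = queryWeight * fieldWeight
          query_weight = idf(doc_freq, max_docs) * query_norm                 # queryWeight
          field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm  # fieldWeight
          return query_weight * field_weight

      QN = 0.057548698  # queryNorm, taken from the explain output

      w_indexing = clause_score(34.0, 2614, 44218, 0.046875, QN)  # -> 0.2304783
      w_22       = clause_score(2.0, 3622, 44218, 0.046875, QN)   # -> 0.046782322

      # coord(1/3): only one of the three query clauses matched this document
      print((w_indexing + w_22) * (1.0 / 3.0))                    # -> 0.09242021...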
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold valuable expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an Introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24. 8.2016 14:03:22
    LCSH
    Indexing
    Subject
    Indexing
  2. Subirats, I.; Prasad, A.R.D.; Keizer, J.; Bagdanov, A.: Implementation of rich metadata formats and semantic tools using DSpace (2008) 0.06
    0.061602607 = product of:
      0.09240391 = sum of:
        0.023949554 = product of:
          0.07184866 = sum of:
            0.07184866 = weight(_text_:objects in 2656) [ClassicSimilarity], result of:
              0.07184866 = score(doc=2656,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.23489517 = fieldWeight in 2656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2656)
          0.33333334 = coord(1/3)
        0.068454355 = sum of:
          0.037266135 = weight(_text_:indexing in 2656) [ClassicSimilarity], result of:
            0.037266135 = score(doc=2656,freq=2.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.16916946 = fieldWeight in 2656, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.03125 = fieldNorm(doc=2656)
          0.031188216 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
            0.031188216 = score(doc=2656,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.15476047 = fieldWeight in 2656, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2656)
      0.6666667 = coord(2/3)
    
    Abstract
    This poster explores the customization of DSpace to allow the use of the AGRIS Application Profile metadata standard and the AGROVOC thesaurus. The objective is the adaptation of DSpace, through the least invasive code changes, either in the form of plug-ins or add-ons, to the specific needs of the Agricultural Sciences and Technology community. Metadata standards such as AGRIS AP, and Knowledge Organization Systems such as the AGROVOC thesaurus, provide mechanisms for sharing information in a standardized manner by recommending the use of common semantics and interoperable syntax (Subirats et al., 2007). AGRIS AP was created to enhance the description, exchange and subsequent retrieval of agricultural Document-like Information Objects (DLIOs). It is a metadata schema which draws from metadata standards such as Dublin Core (DC), the Australian Government Locator Service Metadata (AGLS) and the Agricultural Metadata Element Set (AgMES) namespaces. It allows sharing of information across dispersed bibliographic systems (FAO, 2005). AGROVOC is a multilingual structured thesaurus covering agricultural and related domains. Its main role is to standardize the indexing process in order to make searching simpler and more efficient. AGROVOC is developed by FAO (Lauser et al., 2006). The customization of DSpace is taking place in several phases. First, the AGRIS AP metadata schema was mapped onto the DSpace metadata model, with several enhancements implemented to support AGRIS AP elements. Next, AGROVOC will be integrated as a controlled vocabulary accessed through a local SKOS or OWL file. Eventually the system will be configurable to access AGROVOC either through local files or remotely via web services. Finally, spell checking and tooltips will be incorporated into the user interface to support metadata editing. Adapting DSpace to support AGRIS AP and annotation using the semantically rich AGROVOC thesaurus transforms DSpace into a powerful, domain-specific system for annotation and exchange of bibliographic metadata in the agricultural domain.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  3. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.04
    0.039931707 = product of:
      0.119795114 = sum of:
        0.119795114 = sum of:
          0.06521574 = weight(_text_:indexing in 1026) [ClassicSimilarity], result of:
            0.06521574 = score(doc=1026,freq=2.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.29604656 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
          0.054579377 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
            0.054579377 = score(doc=1026,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.2708308 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
      0.33333334 = coord(1/3)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  4. Gendt, M. van; Isaac, A.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.04
    0.039543662 = product of:
      0.05931549 = sum of:
        0.03592433 = product of:
          0.10777299 = sum of:
            0.10777299 = weight(_text_:objects in 2418) [ClassicSimilarity], result of:
              0.10777299 = score(doc=2418,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.35234275 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.33333334 = coord(1/3)
        0.023391161 = product of:
          0.046782322 = sum of:
            0.046782322 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.046782322 = score(doc=2418,freq=2.0), product of:
                0.20152573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057548698 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  5. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    0.028614417 = product of:
      0.08584325 = sum of:
        0.08584325 = sum of:
          0.05208101 = weight(_text_:indexing in 150) [ClassicSimilarity], result of:
            0.05208101 = score(doc=150,freq=10.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.23642151 = fieldWeight in 150, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.033762235 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.033762235 = score(doc=150,freq=6.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.33333334 = coord(1/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatic extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advanced access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to properly implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  6. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.020311687 = product of:
      0.06093506 = sum of:
        0.06093506 = product of:
          0.18280518 = sum of:
            0.18280518 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.18280518 = score(doc=701,freq=2.0), product of:
                0.4878985 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  7. Semantic Web : Wege zur vernetzten Wissensgesellschaft (2006) 0.02
    0.02018048 = product of:
      0.06054144 = sum of:
        0.06054144 = weight(_text_:systematik in 117) [ClassicSimilarity], result of:
          0.06054144 = score(doc=117,freq=2.0), product of:
            0.355158 = queryWeight, product of:
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.057548698 = queryNorm
            0.1704634 = fieldWeight in 117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.01953125 = fieldNorm(doc=117)
      0.33333334 = coord(1/3)
    
    Content
    Contents: In addition to clarifying the terminology, the first part offers a number of entry points that do not require the reader to already know the Semantic Web in its systematics and workings. The contribution by Andreas Blumauer and Tassilo Pellegrini introduces the central concepts surrounding semantic technologies and gives an overview of the key ideas. The working group around Bernardi et al. leads over into the topic of work organization and discusses the conditions for the use of semantic technologies from the perspective of knowledge work. Two contributions are devoted to the topic of norms and standards: while Christian Galinski examines the fundamental necessity of standards for purposes of interoperability from a top-down perspective, Klaus Birkenbihl offers insight into the technical standards of the Semantic Web from the bottom-up perspective of the World Wide Web Consortium (W3C). With a contribution on the degree of innovation of semantic technologies in economic coordination, Michael Weber and Karl Fröschl enter largely uncharted theoretical territory and lay a foundation for further discussion. The first part is rounded off by a contribution from Bernd Wohlkinger and Tassilo Pellegrini on the technology-policy dimensions of Semantic Web research in the European Union.
  8. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    0.018193126 = product of:
      0.054579377 = sum of:
        0.054579377 = product of:
          0.109158754 = sum of:
            0.109158754 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.109158754 = score(doc=4643,freq=2.0), product of:
                0.20152573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057548698 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  9. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.02
    0.01613671 = product of:
      0.04841013 = sum of:
        0.04841013 = product of:
          0.09682026 = sum of:
            0.09682026 = weight(_text_:indexing in 3829) [ClassicSimilarity], result of:
              0.09682026 = score(doc=3829,freq=6.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.4395151 = fieldWeight in 3829, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art technologies in the Semantic Web and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
  10. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.02
    0.015619576 = product of:
      0.046858728 = sum of:
        0.046858728 = product of:
          0.14057618 = sum of:
            0.14057618 = weight(_text_:objects in 553) [ClassicSimilarity], result of:
              0.14057618 = score(doc=553,freq=10.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.4595864 = fieldWeight in 553, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=553)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D in response to a query for objects described using C. We thus gain access to other collections using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. In our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary of the first collection, Mandragore [6], or that of the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. Besides easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing more generic applications, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
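
    The alignment tools mentioned above exploit different evidence: concept labels, vocabulary structure, collection objects, or external knowledge sources. As a minimal, hypothetical sketch of the simplest of these families, label comparison (the labels and threshold below are invented, and real matchers are far more sophisticated, especially across languages):

      from difflib import SequenceMatcher

      # Hypothetical concept labels from two vocabularies
      mandragore = {"m1": "enluminure", "m2": "manuscrit"}
      iconclass  = {"i1": "illuminated manuscript", "i2": "manuscript"}

      def candidate_mappings(vocab_v, vocab_w, threshold=0.7):
          # Propose (C, D) pairs whose labels are sufficiently similar
          for c_id, c_label in vocab_v.items():
              for d_id, d_label in vocab_w.items():
                  sim = SequenceMatcher(None, c_label.lower(), d_label.lower()).ratio()
                  if sim >= threshold:
                      yield c_id, d_id, round(sim, 2)

      print(list(candidate_mappings(mandragore, iconclass)))  # [('m2', 'i2', 0.95)]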
  11. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    0.015594108 = product of:
      0.046782322 = sum of:
        0.046782322 = product of:
          0.093564644 = sum of:
            0.093564644 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.093564644 = score(doc=6048,freq=2.0), product of:
                0.20152573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057548698 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  12. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    0.015594108 = product of:
      0.046782322 = sum of:
        0.046782322 = product of:
          0.093564644 = sum of:
            0.093564644 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.093564644 = score(doc=100,freq=2.0), product of:
                0.20152573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057548698 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  13. Cahier, J.-P.; Ma, X.; Zaher, L'H.: Document and item-based modeling : a hybrid method for a socio-semantic web (2010) 0.01
    0.0139705725 = product of:
      0.041911718 = sum of:
        0.041911718 = product of:
          0.12573515 = sum of:
            0.12573515 = weight(_text_:objects in 62) [ClassicSimilarity], result of:
              0.12573515 = score(doc=62,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.41106653 = fieldWeight in 62, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=62)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper discusses the challenges of categorising documents and "items of the world" to promote knowledge sharing in large communities of interest. We present the DOCMA method (Document and Item-based Model for Action), dedicated to end-users who have minimal or no knowledge of information science. Community members can elicit, structure and index business items stemming from their inquiry, including projects, actors, products, places of interest, and geo-situated objects. This hybrid method has been applied in a collaborative Web portal in the field of sustainability for the past two years.
  14. Cahier, J.-P.; Zaher, L'H.; Isoard , G.: Document et modèle pour l'action, une méthode pour le web socio-sémantique : application à un web 2.0 en développement durable (2010) 0.01
    0.0139705725 = product of:
      0.041911718 = sum of:
        0.041911718 = product of:
          0.12573515 = sum of:
            0.12573515 = weight(_text_:objects in 4836) [ClassicSimilarity], result of:
              0.12573515 = score(doc=4836,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.41106653 = fieldWeight in 4836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4836)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We present the DOCMA method (DOCument and Model for Action), focused on socio-semantic Web applications in large communities of interest. DOCMA is dedicated to end-users without any knowledge of information science. Community members can elicit, structure and index shared business items emerging from their inquiry (such as projects, actors, products, and geographically situated objects of interest). We apply DOCMA to an experiment in the field of sustainable development: the Cartodd-Map21 collaborative Web portal.
  15. Bianchini, C.; Willer, M.: ISBD resource and Its description in the context of the Semantic Web (2014) 0.01
    0.0139705725 = product of:
      0.041911718 = sum of:
        0.041911718 = product of:
          0.12573515 = sum of:
            0.12573515 = weight(_text_:objects in 1998) [ClassicSimilarity], result of:
              0.12573515 = score(doc=1998,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.41106653 = fieldWeight in 1998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1998)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This article explores the question "What is an International Standard Bibliographic Description (ISBD) resource in the context of the Semantic Web, and what is the relationship of its description to linked data?" This question is discussed against the background of the dichotomy between description and access, using the Semantic Web's differentiation of three logical layers: real-world objects, web of data, and special-purpose (bibliographic) data. The representation of bibliographic data as linked data is discussed, distinguishing the description of a resource from the iconic/objective and the informational/subjective viewpoints. In conclusion, the authors give their views on possible directions for the future development of the ISBD.
  16. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.01
    0.013447259 = product of:
      0.040341776 = sum of:
        0.040341776 = product of:
          0.08068355 = sum of:
            0.08068355 = weight(_text_:indexing in 97) [ClassicSimilarity], result of:
              0.08068355 = score(doc=97,freq=6.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3662626 = fieldWeight in 97, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=97)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Robertson and Spärck Jones pioneered experimental probabilistic models (the Binary Independence Model), with a typology generalizing the Boolean model, frequency counting to calculate elementary weightings, and their combination into a global probabilistic estimate. However, this model did not consider dependencies between indexing terms. An extension to mixture models (e.g., using a 2-Poisson law) made it possible to take these dependencies into account from a macroscopic point of view (BM25), as well as a shallow linguistic processing of co-references. New approaches (language models, for example "bag of words" models, probabilistic dependencies between requests and documents, and consequently Bayesian inference using a Dirichlet conjugate prior) furnished new solutions for document structuring (categorization) and for index smoothing. Presently, in these probabilistic models the main issues have been addressed from a formal point of view only. Thus, linguistic properties are neglected in the indexing language. The authors examine how linguistic and semantic modeling can be integrated into indexing languages, and set up a hybrid model that makes it possible to deal with different information retrieval problems in a unified way.
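
    As a minimal sketch of the BM25 ranking function the abstract refers to (the standard k1/b parameterization, not the authors' own variant; the toy corpus below is invented):

      import math
      from collections import Counter

      def bm25(query_terms, doc, corpus, k1=1.2, b=0.75):
          # Score one document (a token list) against a query over a corpus.
          N = len(corpus)
          avgdl = sum(len(d) for d in corpus) / N
          tf = Counter(doc)
          score = 0.0
          for q in query_terms:
              n_q = sum(1 for d in corpus if q in d)              # document frequency
              idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)   # smoothed idf
              f = tf[q]
              score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
          return score

      docs = [["semantic", "web", "indexing"], ["semantic", "search"], ["library", "catalogue"]]
      print(bm25(["semantic", "indexing"], docs[0], docs))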
  17. Luo, Y.; Picalausa, F.; Fletcher, G.H.L.; Hidders, J.; Vansummeren, S.: Storing and indexing massive RDF datasets (2012) 0.01
    0.013447259 = product of:
      0.040341776 = sum of:
        0.040341776 = product of:
          0.08068355 = sum of:
            0.08068355 = weight(_text_:indexing in 414) [ClassicSimilarity], result of:
              0.08068355 = score(doc=414,freq=6.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3662626 = fieldWeight in 414, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=414)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The resource description framework (RDF for short) provides a flexible method for modeling information on the Web [34,40]. All data items in RDF are uniformly represented as triples of the form (subject, predicate, object), sometimes also referred to as (subject, property, value) triples. As a running example for this chapter, a small fragment of an RDF dataset concerning music and music fans is given in Fig. 2.1. Spurred by efforts like the Linking Open Data project, increasingly large volumes of data are being published in RDF. Notable contributors in this respect include areas as diverse as government, the life sciences, Web 2.0 communities, and so on. To give an idea of the volumes of RDF data concerned: as of September 2012, there are 31,634,213,770 triples in total published by data sources participating in the Linking Open Data project. Many individual data sources (like, e.g., PubMed, DBpedia, MusicBrainz) contain hundreds of millions of triples (797, 672, and 179 million, respectively). These large volumes of RDF data motivate the need for scalable native RDF data management solutions capable of efficiently storing, indexing, and querying RDF data. In this chapter, we present a general and up-to-date survey of the current state of the art in RDF storage and indexing.
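
    The chapter's running example (Fig. 2.1) is not reproduced here, so the data below is hypothetical. As a minimal sketch of the (subject, predicate, object) model, using the Python rdflib library:

      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/music/")  # hypothetical namespace

      g = Graph()
      g.bind("ex", EX)
      # Each statement is one (subject, predicate, object) triple:
      g.add((EX.alice, EX.likes, EX.Nightwish))
      g.add((EX.Nightwish, EX.genre, Literal("symphonic metal")))

      print(g.serialize(format="turtle"))          # the same triples as Turtle
      for band in g.objects(EX.alice, EX.likes):   # pattern query: (alice, likes, ?o)
          print(band)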
  18. Svensson, L.G.: Unified access : a semantic Web based model for multilingual navigation in heterogeneous data sources (2008) 0.01
    0.013175569 = product of:
      0.039526705 = sum of:
        0.039526705 = product of:
          0.07905341 = sum of:
            0.07905341 = weight(_text_:indexing in 2191) [ClassicSimilarity], result of:
              0.07905341 = score(doc=2191,freq=4.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3588626 = fieldWeight in 2191, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Most online library catalogues are not well equipped for subject search. On the one hand, it is difficult to navigate the structures of the thesauri and classification systems used for indexing. On the other hand, there is little or no support for the integration of crosswalks between different controlled vocabularies, so that a subject search query formulated using one controlled vocabulary will not find resources indexed with another knowledge organisation system even if there exists a crosswalk between them. In this paper we will look at Semantic Web technologies and a prototype system leveraging those technologies in order to enhance the subject search possibilities in heterogeneously indexed repositories. Finally, we will take a brief look at different initiatives aimed at integrating library data into the Semantic Web.
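
    As a minimal sketch of the crosswalk idea (the URIs are hypothetical; the rdflib library is assumed), a skos:exactMatch link lets a query formulated against one vocabulary also retrieve resources indexed with another:

      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS

      A = Namespace("http://example.org/vocabA/")  # hypothetical vocabularies
      B = Namespace("http://example.org/vocabB/")

      crosswalk = Graph()
      crosswalk.add((A.SemanticWeb, SKOS.exactMatch, B.WebSemantique))

      def expand(concept, g):
          # The concept plus everything linked to it by skos:exactMatch
          expanded = {concept}
          expanded.update(g.objects(concept, SKOS.exactMatch))
          expanded.update(g.subjects(SKOS.exactMatch, concept))  # mappings are symmetric
          return expanded

      print(expand(A.SemanticWeb, crosswalk))  # query terms covering both vocabularies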
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Ed.: K. Knull-Schlomann et al.
  19. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.01299509 = product of:
      0.03898527 = sum of:
        0.03898527 = product of:
          0.07797054 = sum of:
            0.07797054 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.07797054 = score(doc=2090,freq=2.0), product of:
                0.20152573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057548698 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  20. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.01
    0.012422046 = product of:
      0.037266135 = sum of:
        0.037266135 = product of:
          0.07453227 = sum of:
            0.07453227 = weight(_text_:indexing in 4709) [ClassicSimilarity], result of:
              0.07453227 = score(doc=4709,freq=8.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3383389 = fieldWeight in 4709, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."

Languages

  • e 57
  • d 7
  • f 1

Types

  • a 38
  • el 21
  • m 13
  • s 4
  • x 2
  • n 1