Search (14 results, page 1 of 1)

  • Filter: type_ss:"el"
  • Filter: type_ss:"s"
  1. Bahls, D.; Scherp, G.; Tochtermann, K.; Hasselbring, W.: Towards a recommender system for statistical research data (2012) 0.01
    0.008254346 = product of:
      0.04127173 = sum of:
        0.04127173 = product of:
          0.08254346 = sum of:
            0.08254346 = weight(_text_:data in 474) [ClassicSimilarity], result of:
              0.08254346 = score(doc=474,freq=22.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5793489 = fieldWeight in 474, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=474)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
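The score breakdown above follows Lucene's classic tf-idf model. As a check, here is a minimal sketch that reproduces the arithmetic from the constants shown in the explanation, assuming Lucene's default ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

```python
import math

# Constants copied directly from the score explanation for result 1.
doc_freq, max_docs = 5088, 44218
query_norm = 0.04505818
field_norm = 0.0390625      # quantized 1/sqrt(#terms in field)
freq = 22.0                 # occurrences of "data" in doc 474

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~3.1620505
tf = math.sqrt(freq)                               # ~4.690416

query_weight = idf * query_norm                    # ~0.14247625
field_weight = tf * idf * field_norm               # ~0.5793489

raw_score = query_weight * field_weight            # ~0.08254346
# coord factors shown above: 1 of 2 clauses matched, then 1 of 5 query parts
final_score = raw_score * 0.5 * 0.2                # ~0.008254346
print(final_score)
```

The reconstructed value matches the 0.008254346 shown for this hit, confirming that the tree is a plain product of tf, idf, the norms, and the two coord factors.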
    
    Abstract
    To effectively promote the exchange of scientific data, retrieval services are required that suit the needs of the research community. A large amount of research in the field of economics is based on statistical data, which is often drawn from external sources such as data agencies, statistical offices or affiliated institutes. Since producing such data for a particular research question is expensive in time and money (if it is possible at all), research activities are often shaped by the availability of suitable data: researchers choose or adjust their questions so that the empirical foundation to support their results is available. As a consequence, and for lack of an information infrastructure in this domain, researchers look out and poll for newly available data in all sorts of directions. This circumstance, and a recent report from the High Level Expert Group on Scientific Data, motivates recommendation and notification services for research data sets. In this paper, we elaborate on a case-based recommender system for statistical data, which allows for precise query specification. We discuss the required similarity measures on the basis of cross-domain code lists and propose a system architecture. To address the problem of continuous polling, we describe a notification service that informs researchers about newly available data sets based on their personal requests.
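The abstract does not spell out the similarity measures it proposes. Purely as an illustration of the case-based idea, a recommender might compare statistical datasets by the overlap of the code lists they use; the Jaccard measure and the toy code lists below are my assumptions, not the authors' published method:

```python
# Hypothetical sketch: rank statistical datasets by the overlap of their
# code lists (e.g. country codes, industry classifications) with a query.
# Jaccard overlap is an illustrative choice, not the measure from the paper.

def code_list_similarity(codes_a: set[str], codes_b: set[str]) -> float:
    """Jaccard similarity between two sets of classification codes."""
    if not codes_a and not codes_b:
        return 0.0
    return len(codes_a & codes_b) / len(codes_a | codes_b)

def recommend(query_codes: set[str], datasets: dict[str, set[str]], k: int = 3):
    """Rank candidate datasets by code-list overlap with the query."""
    ranked = sorted(datasets.items(),
                    key=lambda item: code_list_similarity(query_codes, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Invented example data: dataset names and the codes they are described by.
datasets = {
    "gdp_eu":    {"DE", "FR", "IT", "NACE:C"},
    "trade_us":  {"US", "CA", "HS:85"},
    "labour_eu": {"DE", "FR", "NACE:C", "ISCO:2"},
}
print(recommend({"DE", "FR", "NACE:C"}, datasets))
```

A notification service along the lines the paper describes could then re-run stored queries like this one whenever new data sets are registered, and alert the researcher when a similarity score exceeds a threshold.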
  2. Dietze, S.; Maynard, D.; Demidova, E.; Risse, T.; Stavrakas, Y.: Entity extraction and consolidation for social Web content preservation (2012) 0.00
    0.0049775573 = product of:
      0.024887787 = sum of:
        0.024887787 = product of:
          0.049775574 = sum of:
            0.049775574 = weight(_text_:data in 470) [ClassicSimilarity], result of:
              0.049775574 = score(doc=470,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.34936053 = fieldWeight in 470, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=470)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    With the rapidly increasing pace at which Web content is evolving, particularly social media, preserving the Web and its evolution over time becomes an important challenge. Meaningful analysis of Web content lends itself to an entity-centric view that organises Web resources according to the information objects related to them. The crucial challenge is therefore to extract, detect and correlate entities from a vast number of heterogeneous Web resources whose nature and quality of content may vary heavily. While a wealth of information extraction tools aid this process, we believe that the consolidation of automatically extracted data has to be treated as an equally important step in order to ensure the high quality and non-ambiguity of the generated data. In this paper we present an approach based on an iterative cycle exploiting Web data for (1) targeted archiving/crawling of Web objects, (2) entity extraction and detection, and (3) entity correlation. The long-term goal is to preserve Web content over time and to allow its navigation and analysis based on well-formed structured RDF data about entities.
  3. Grassi, M.; Morbidoni, C.; Nucci, M.; Fonda, S.; Ledda, G.: Pundit: semantically structured annotations for Web contents and digital libraries (2012) 0.00
    0.0049775573 = product of:
      0.024887787 = sum of:
        0.024887787 = product of:
          0.049775574 = sum of:
            0.049775574 = weight(_text_:data in 473) [ClassicSimilarity], result of:
              0.049775574 = score(doc=473,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.34936053 = fieldWeight in 473, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=473)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper introduces Pundit: a novel semantic annotation tool that allows users to create structured data while annotating Web pages, relying on stand-off mark-up techniques. Pundit supports different types of annotations, ranging from simple comments to semantic links to Web of Data entities, fine-grained cross-references and citations. In addition, it can be configured to include custom controlled vocabularies and has been designed to enable groups of users to share their annotations and collaboratively create structured knowledge. Pundit allows users to create semantically typed relations among heterogeneous resources that have different multimedia formats and belong to different pages and domains. In this way, annotations can reinforce existing data connections or create new ones, augmenting the original information and generating new semantically structured aggregations of knowledge. These can later be exploited both by other users to better navigate digital library and Web content, and by applications to improve data management.
  4. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.00
    0.0044798017 = product of:
      0.022399008 = sum of:
        0.022399008 = product of:
          0.044798017 = sum of:
            0.044798017 = weight(_text_:data in 468) [ClassicSimilarity], result of:
              0.044798017 = score(doc=468,freq=18.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.31442446 = fieldWeight in 468, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of content created digitally has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation: for example, digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats, and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving communities have made valuable contributions to the management of huge amounts of knowledge and data. However, the two approach this topic from different views, which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving communities, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is that of semantic technologies, which are supported by appropriate W3C recommendations and a large user community. At the forefront of making the Semantic Web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (Web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives.
Semantic representations of contextual knowledge about cultural heritage objects will enhance the organization of and access to data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human-Computer Interaction) will also be included in the research.
    One of the major challenges of digital archiving is how to deal with changing technologies and changing user communities. On the one hand, software, hardware and (multimedia) data formats that become obsolete and are no longer supported still need to be kept accessible. On the other hand, changing user communities necessitate technical means to formalize, detect and measure knowledge evolution. Furthermore, digital archival records are usually not deleted from the AIS, and the amount of digitally archived (multimedia) content can therefore be expected to grow rapidly. Efficient storage management solutions are thus required, geared to the fact that cultural heritage is not accessed as frequently as the up-to-date content residing in a digital library. Software and hardware need to be tightly connected on the basis of sophisticated knowledge representation and management models in order to face this challenge. In line with the above, contributions to the workshop should focus on, but are not limited to:
    * Semantic search & semantic information retrieval in digital archives and digital libraries
    * Semantic multimedia archives
    * Ontologies & linked data for digital archives and digital libraries
    * Ontologies & linked data for multimedia archives
    * Implementations and evaluations of semantic digital archives
    * Visualization and exploration of digital content
    * User interfaces for semantic digital libraries
    * User interfaces for intelligent multimedia information retrieval
    * User studies focusing on end-user needs and information seeking behavior of end-users
    * Theoretical and practical archiving frameworks using Semantic (Web) technologies
    * Logical theories for digital archives
    * Semantic (Web) services implementing the OAIS standard
    * Semantic or logical provenance models for digital archives or digital libraries
    * Information integration/semantic ingest (e.g. from digital libraries)
    * Trust for ingest and data security/integrity checks for long-term storage of archival records
    * Semantic extensions of emulation/virtualization methodologies tailored for digital archives
    * Semantic long-term storage and hardware organization tailored for AIS
    * Migration strategies based on Semantic (Web) technologies
    * Knowledge evolution
    We expect new insights into and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience of scientists and scholars from the digital library, digital archiving, multimedia technology and Semantic Web communities, the information and library sciences, as well as the social sciences and (digital) humanities, in particular people working on the mentioned topics. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  5. Gehirn, Gedächtnis, neuronale Netze (1996) 0.00
    0.004273333 = product of:
      0.021366665 = sum of:
        0.021366665 = product of:
          0.04273333 = sum of:
            0.04273333 = weight(_text_:22 in 4661) [ClassicSimilarity], result of:
              0.04273333 = score(doc=4661,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2708308 = fieldWeight in 4661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4661)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 7.2000 18:45:51
  6. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.00
    0.0042235977 = product of:
      0.021117989 = sum of:
        0.021117989 = product of:
          0.042235978 = sum of:
            0.042235978 = weight(_text_:data in 469) [ClassicSimilarity], result of:
              0.042235978 = score(doc=469,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.29644224 = fieldWeight in 469, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=469)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, along with means to assess their authenticity upon re-execution. We discuss options as well as limitations and open challenges on the way to sound preservation, specifically within scientific processes.
  7. Voigt, M.; Mitschick, A.; Schulz, J.: Yet another triple store benchmark? : practical experiences with real-world data (2012) 0.00
    0.0042235977 = product of:
      0.021117989 = sum of:
        0.021117989 = product of:
          0.042235978 = sum of:
            0.042235978 = weight(_text_:data in 476) [ClassicSimilarity], result of:
              0.042235978 = score(doc=476,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.29644224 = fieldWeight in 476, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=476)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Although quite a number of RDF triple store benchmarks have already been conducted and published, it is not that easy to find the right storage solution for a particular Semantic Web project. A basic reason is the lack of comprehensive performance tests with real-world data. Confronted with this problem, we set up and ran our own tests with a selection of four up-to-date triple store implementations, and came to interesting findings. In this paper, we briefly present the benchmark setup, including the store configuration, the datasets, and the test queries. Based on a set of metrics, our results demonstrate the importance of real-world datasets in identifying anomalies or differences in reasoning. Finally, we must state that it is indeed difficult to give a general recommendation, as no store wins in every field.
  8. Open MIND (2015) 0.00
    0.0030523809 = product of:
      0.015261904 = sum of:
        0.015261904 = product of:
          0.030523809 = sum of:
            0.030523809 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
              0.030523809 = score(doc=1648,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.19345059 = fieldWeight in 1648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1648)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    27. 1.2015 11:48:22
  9. Networked knowledge organization systems (2001) 0.00
    0.0029865343 = product of:
      0.014932672 = sum of:
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 6473) [ClassicSimilarity], result of:
              0.029865343 = score(doc=6473,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 6473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6473)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Knowledge Organization Systems can comprise thesauri and other controlled lists of keywords, ontologies, classification systems, clustering approaches, taxonomies, gazetteers, dictionaries, lexical databases, concept maps/spaces, semantic road maps, etc. These schemas enable knowledge structuring and management, knowledge-based data processing, and systematic access to knowledge structures in individual collections and digital libraries. Used as interactive information services on the Internet, they have an increased potential to support the description, discovery and retrieval of heterogeneous information resources, and to contribute to an overall resource discovery infrastructure.
  10. Alexiev, V.: Implementing CIDOC CRM search based on fundamental relations and OWLIM rules (2012) 0.00
    0.0024887787 = product of:
      0.012443894 = sum of:
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 467) [ClassicSimilarity], result of:
              0.024887787 = score(doc=467,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 467, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=467)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The CIDOC CRM provides an ontology for describing entities, properties and relationships appearing in cultural heritage (CH) documentation, history and archeology. CRM promotes shared understanding by providing an extensible semantic framework that any CH information can be mapped to. CRM data is usually represented in semantic web format (RDF) and comprises complex graphs of nodes and properties. An important question is how a user can search through such complex graphs, since the number of possible combinations is staggering. One approach "compresses" the semantic network by mapping many CRM entity classes to a few "Fundamental Concepts" (FC), and mapping whole networks of CRM properties to fewer "Fundamental Relations" (FR). These FC and FRs serve as a "search index" over the CRM semantic web and allow the user to use a simpler query vocabulary. We describe an implementation of CRM FR Search based on OWLIM Rules, done as part of the ResearchSpace (RS) project. We describe the technical details, problems and difficulties encountered, benefits and disadvantages of using OWLIM rules, and preliminary performance results. We provide implementation experience that can be valuable for further implementation, definition and maintenance of CRM FRs.
  11. Wartena, C.; Sommer, M.: Automatic classification of scientific records using the German Subject Heading Authority File (SWD) (2012) 0.00
    0.0024887787 = product of:
      0.012443894 = sum of:
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 472) [ClassicSimilarity], result of:
              0.024887787 = score(doc=472,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 472, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=472)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The following paper deals with an automatic text classification method that does not require training documents. The method uses the German Subject Heading Authority File (SWD), provided by the linked data service of the German National Library. Recently the SWD was enriched with notations of the Dewey Decimal Classification (DDC), which made it possible to utilize the subject headings as textual representations for the notations of the DDC. Basically, we derive the classification of a text from the classification of the words in the text given by the thesaurus. The method was tested by classifying 3826 OAI records from 7 different repositories. Mean reciprocal rank and recall were chosen as evaluation measures. A direct comparison to a machine learning method has shown that this method is definitely competitive. Thus we can conclude that the enriched version of the SWD provides high-quality information with broad coverage for the classification of German scientific articles.
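The core idea described above, deriving a text's class from the classes of its words, can be sketched as a simple voting scheme, together with the mean reciprocal rank used for evaluation. The tiny heading-to-DDC table below is invented for illustration (it is not the SWD), and simple vote counting is an assumption about the aggregation step:

```python
from collections import Counter

# Invented toy thesaurus: subject headings mapped to DDC notations.
heading_to_ddc = {
    "bibliothek": "020", "katalog": "020",
    "algorithmus": "004", "software": "004",
    "statistik": "310",
}

def classify(words: list[str]) -> list[str]:
    """Rank DDC notations by how many words of the text vote for them."""
    votes = Counter(heading_to_ddc[w] for w in words if w in heading_to_ddc)
    return [ddc for ddc, _ in votes.most_common()]

def mean_reciprocal_rank(rankings: list[list[str]], gold: list[str]) -> float:
    """MRR: average of 1/rank of the correct class over all documents."""
    total = 0.0
    for ranked, correct in zip(rankings, gold):
        if correct in ranked:
            total += 1.0 / (ranked.index(correct) + 1)
    return total / len(gold)

doc = ["software", "algorithmus", "statistik"]
ranking = classify(doc)   # "004" gets two votes, "310" one
print(ranking, mean_reciprocal_rank([ranking], ["310"]))
```

No training documents are needed in this scheme: all the class knowledge sits in the thesaurus mapping, which is exactly what makes the DDC-enriched SWD attractive for the task.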
  12. Bozzato, L.; Braghin, S.; Trombetta, A.: ¬A method and guidelines for the cooperation of ontologies and relational databases in Semantic Web applications (2012) 0.00
    0.0024887787 = product of:
      0.012443894 = sum of:
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 475) [ClassicSimilarity], result of:
              0.024887787 = score(doc=475,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 475, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=475)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Ontologies are a well-established way of representing complex structured information, and they provide a sound conceptual foundation for Semantic Web technologies. On the other hand, a huge amount of the information available on the Web is stored in legacy relational databases. The issues raised by the collaboration between these two worlds are well known and addressed by consolidated mapping languages. Nevertheless, to the best of our knowledge, a best practice for such cooperation is missing: in this work we therefore present a method to guide the definition of cooperation between ontology-based and relational database systems. Our method, mainly based on ideas from knowledge reuse and re-engineering, aims at the separation of data between database and ontology instances and at the definition of suitable mappings in both directions, taking advantage of the representation possibilities offered by both models. We present the steps of our method along with guidelines for their application. Finally, we propose an example of its deployment in the context of a large repository of bio-medical images we developed.
  13. Metrics in research : for better or worse? (2016) 0.00
    0.001991023 = product of:
      0.009955115 = sum of:
        0.009955115 = product of:
          0.01991023 = sum of:
            0.01991023 = weight(_text_:data in 3312) [ClassicSimilarity], result of:
              0.01991023 = score(doc=3312,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.1397442 = fieldWeight in 3312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3312)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    If you are an academic researcher but have not (yet) earned your Nobel Prize or reached your retirement, it is unlikely that you have never heard about research metrics. These metrics aim at quantifying various aspects of the research process, at the level of individual researchers (e.g. h-index, altmetrics), scientific journals (e.g. impact factors) or entire universities/countries (e.g. rankings). Although such "measurements" have existed in a simple form for a long time, their widespread calculation was enabled by the advent of the digital era (large amounts of data available worldwide in a computer-compatible format). And in this new era, what becomes technically possible will be done, and what is done and appears to simplify our lives will be used. As a result, a rapidly growing number of statistics-based numerical indices are nowadays fed into decision-making processes. This is true in nearly all aspects of society (politics, economy, education and private life), and in particular in research, where metrics play an increasingly important role in determining positions, funding, awards, research programs, career choices, reputations, etc.
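Of the metrics named above, the h-index has a particularly simple definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch of that computation:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:      # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has h-index 4:
# four papers each have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))
```

The same "sort, then find the crossing point" pattern underlies many of the statistics-based indices the abstract refers to, which is part of why they became cheap to compute at scale in the digital era.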
  14. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    0.0014932671 = product of:
      0.007466336 = sum of:
        0.007466336 = product of:
          0.014932672 = sum of:
            0.014932672 = weight(_text_:data in 3391) [ClassicSimilarity], result of:
              0.014932672 = score(doc=3391,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.10480815 = fieldWeight in 3391, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3391)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Short Papers
    * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm.
    * Unifying SysML and OWL, Henson Graves.
    * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens.
    * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm.
    * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez.
    * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik.
    * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin.
    * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer.
    * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure.
    * Temporal Classes and OWL, Natalya Keberle.
    * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler.
    * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan.
    * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier.
    * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das.
    * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das.
    * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov.
    * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez.
    * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.