Search (9 results, page 1 of 1)

  • type_ss:"el"
  • type_ss:"s"
  • year_i:[2010 TO 2020}  (Solr/Lucene range syntax: "[" is an inclusive lower bound, "}" an exclusive upper bound)
  1. Voigt, M.; Mitschick, A.; Schulz, J.: Yet another triple store benchmark? : practical experiences with real-world data (2012) 0.03
    0.027691858 = product of:
      0.11076743 = sum of:
        0.025159499 = weight(_text_:web in 476) [ClassicSimilarity], result of:
          0.025159499 = score(doc=476,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=476)
        0.06044843 = weight(_text_:world in 476) [ClassicSimilarity], result of:
          0.06044843 = score(doc=476,freq=6.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.44132966 = fieldWeight in 476, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=476)
        0.025159499 = weight(_text_:web in 476) [ClassicSimilarity], result of:
          0.025159499 = score(doc=476,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=476)
      0.25 = coord(3/12)
    
    Abstract
     Although quite a number of RDF triple store benchmarks have already been conducted and published, it is not that easy to find the right storage solution for your particular Semantic Web project. A basic reason is the lack of comprehensive performance tests with real-world data. Confronted with this problem, we set up and ran our own tests with a selection of four up-to-date triple store implementations, and came to interesting findings. In this paper, we briefly present the benchmark setup, including the store configuration, the datasets, and the test queries. Based on a set of metrics, our results demonstrate the importance of real-world datasets in identifying anomalies or differences in reasoning. Finally, we must state that it is indeed difficult to give a general recommendation, as no store wins in every field.
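The score breakdown attached to this and the following results is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), per-term score = queryWeight * fieldWeight, document score = coord * sum of term scores), the numbers reported for this first result can be reproduced:

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # tf = sqrt(termFreq); queryWeight = idf * queryNorm;
    # fieldWeight = tf * idf * fieldNorm; term score = queryWeight * fieldWeight
    tf = math.sqrt(freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf * i * field_norm
    return query_weight * field_weight

# "web" term in doc 476 (values taken from the explain tree above)
web = term_score(freq=2.0, doc_freq=4597, max_docs=44218,
                 query_norm=0.035634913, field_norm=0.046875)
# "world" term in doc 476
world = term_score(freq=6.0, doc_freq=2573, max_docs=44218,
                   query_norm=0.035634913, field_norm=0.046875)

# document score: coord(3/12) * sum of the three matching term scores
score = (3 / 12) * (web + world + web)
print(score)  # close to the 0.027691858 reported above
```

The same arithmetic applies to every entry below; only freq, docFreq, fieldNorm, and the coord fraction change per document.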
  2. Dietze, S.; Maynard, D.; Demidova, E.; Risse, T.; Stavrakas, Y.: Entity extraction and consolidation for social Web content preservation (2012) 0.02
    0.02096625 = product of:
      0.1257975 = sum of:
        0.06289875 = weight(_text_:web in 470) [ClassicSimilarity], result of:
          0.06289875 = score(doc=470,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5408555 = fieldWeight in 470, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
        0.06289875 = weight(_text_:web in 470) [ClassicSimilarity], result of:
          0.06289875 = score(doc=470,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5408555 = fieldWeight in 470, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
      0.16666667 = coord(2/12)
    
    Abstract
     With the rapidly increasing pace at which Web content is evolving, particularly social media, preserving the Web and its evolution over time becomes an important challenge. Meaningful analysis of Web content lends itself to an entity-centric view that organises Web resources according to the information objects related to them. Therefore, the crucial challenge is to extract, detect and correlate entities from a vast number of heterogeneous Web resources where the nature and quality of the content may vary heavily. While a wealth of information extraction tools aid this process, we believe that the consolidation of automatically extracted data has to be treated as an equally important step in order to ensure high quality and non-ambiguity of the generated data. In this paper we present an approach based on an iterative cycle exploiting Web data for (1) targeted archiving/crawling of Web objects, (2) entity extraction and detection, and (3) entity correlation. The long-term goal is to preserve Web content over time and allow its navigation and analysis based on well-formed structured RDF data about entities.
  3. Grassi, M.; Morbidoni, C.; Nucci, M.; Fonda, S.; Ledda, G.: Pundit: semantically structured annotations for Web contents and digital libraries (2012) 0.01
    0.013977501 = product of:
      0.083865 = sum of:
        0.0419325 = weight(_text_:web in 473) [ClassicSimilarity], result of:
          0.0419325 = score(doc=473,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 473, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=473)
        0.0419325 = weight(_text_:web in 473) [ClassicSimilarity], result of:
          0.0419325 = score(doc=473,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 473, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=473)
      0.16666667 = coord(2/12)
    
    Abstract
     This paper introduces Pundit: a novel semantic annotation tool that allows users to create structured data while annotating Web pages, relying on stand-off mark-up techniques. Pundit provides support for different types of annotations, ranging from simple comments to semantic links to Web of data entities and fine-grained cross-references and citations. In addition, it can be configured to include custom controlled vocabularies, and it has been designed to enable groups of users to share their annotations and collaboratively create structured knowledge. Pundit allows creating semantically typed relations among heterogeneous resources, whether they have different multimedia formats or belong to different pages and domains. In this way, annotations can reinforce existing data connections or create new ones and augment the original information, generating new semantically structured aggregations of knowledge. These can later be exploited both by other users to better navigate digital library (DL) and Web content, and by applications to improve data management.
  4. Bozzato, L.; Braghin, S.; Trombetta, A.: ¬A method and guidelines for the cooperation of ontologies and relational databases in Semantic Web applications (2012) 0.01
    0.012104871 = product of:
      0.07262922 = sum of:
        0.03631461 = weight(_text_:web in 475) [ClassicSimilarity], result of:
          0.03631461 = score(doc=475,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3122631 = fieldWeight in 475, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=475)
        0.03631461 = weight(_text_:web in 475) [ClassicSimilarity], result of:
          0.03631461 = score(doc=475,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3122631 = fieldWeight in 475, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=475)
      0.16666667 = coord(2/12)
    
    Abstract
     Ontologies are a well-established way of representing complex structured information, and they provide a sound conceptual foundation for Semantic Web technologies. On the other hand, a huge amount of information available on the web is stored in legacy relational databases. The issues raised by the collaboration between these two worlds are well known and addressed by consolidated mapping languages. Nevertheless, to the best of our knowledge, a best practice for such cooperation is missing: in this work we thus present a method to guide the definition of cooperation between ontology-based systems and relational databases. Our method, mainly based on ideas from knowledge reuse and re-engineering, is aimed at the separation of data between database and ontology instances and at the definition of suitable mappings in both directions, taking advantage of the representation possibilities offered by both models. We present the steps of our method along with guidelines for their application. Finally, we propose an example of its deployment in the context of a large repository of bio-medical images we developed.
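The database-to-ontology direction the abstract describes can be sketched as a toy row-to-instance mapping. All table, column, class, and prefix names below are hypothetical illustrations, not taken from the paper or from any standard mapping language:

```python
# Toy sketch: materialize ontology instances (as triples) from relational rows.
# Table/column/class names are hypothetical.
rows = [
    {"id": 1, "title": "Chest X-ray", "modality": "XR"},
    {"id": 2, "title": "Brain MRI", "modality": "MR"},
]

def row_to_triples(row, table="image", cls="ex:MedicalImage"):
    # One ontology instance per row: a typed subject plus one triple per column.
    subject = f"ex:{table}/{row['id']}"
    yield (subject, "rdf:type", cls)
    yield (subject, "ex:title", row["title"])
    yield (subject, "ex:modality", row["modality"])

triples = [t for r in rows for t in row_to_triples(r)]
print(len(triples))  # 6 (three triples per row)
```

A real deployment would use a consolidated mapping language rather than ad-hoc code; the point here is only the separation of data (rows stay in the database, instances live in the ontology) that the method aims at.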
  5. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.01
    0.010271324 = product of:
      0.06162794 = sum of:
        0.03081397 = weight(_text_:web in 468) [ClassicSimilarity], result of:
          0.03081397 = score(doc=468,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.26496404 = fieldWeight in 468, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
        0.03081397 = weight(_text_:web in 468) [ClassicSimilarity], result of:
          0.03081397 = score(doc=468,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.26496404 = fieldWeight in 468, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
      0.16666667 = coord(2/12)
    
    Abstract
     Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of digitally created content has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation: e.g., digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats, and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving communities have made valuable contributions to the management of huge amounts of knowledge and data. However, both approach this topic from different views, which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving communities, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, which are supported by appropriate W3C recommendations and a large user community. At the forefront of making the semantic web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives.
Semantic representations of contextual knowledge about cultural heritage objects will enhance organization and access of data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human Computer Interaction) will also be included in the research.
     Topics:
     • Semantic search & semantic information retrieval in digital archives and digital libraries
     • Semantic multimedia archives
     • Ontologies & linked data for digital archives and digital libraries
     • Ontologies & linked data for multimedia archives
     • Implementations and evaluations of semantic digital archives
     • Visualization and exploration of digital content
     • User interfaces for semantic digital libraries
     • User interfaces for intelligent multimedia information retrieval
     • User studies focusing on end-user needs and information seeking behavior of end-users
     • Theoretical and practical archiving frameworks using Semantic (Web) technologies
     • Logical theories for digital archives
     • Semantic (Web) services implementing the OAIS standard
     • Semantic or logical provenance models for digital archives or digital libraries
     • Information integration/semantic ingest (e.g. from digital libraries)
     • Trust for ingest and data security/integrity checks for long-term storage of archival records
     • Semantic extensions of emulation/virtualization methodologies tailored for digital archives
     • Semantic long-term storage and hardware organization tailored for AIS
     • Migration strategies based on Semantic (Web) technologies
     • Knowledge evolution
     We expect new insights and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience of scientists and scholars from the digital library, digital archiving, multimedia technology and semantic web communities, the information and library sciences, as well as from the social sciences and (digital) humanities, in particular people working on the topics mentioned above. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  6. Alexiev, V.: Implementing CIDOC CRM search based on fundamental relations and OWLIM rules (2012) 0.01
    0.009883585 = product of:
      0.05930151 = sum of:
        0.029650755 = weight(_text_:web in 467) [ClassicSimilarity], result of:
          0.029650755 = score(doc=467,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 467, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=467)
        0.029650755 = weight(_text_:web in 467) [ClassicSimilarity], result of:
          0.029650755 = score(doc=467,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 467, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=467)
      0.16666667 = coord(2/12)
    
    Abstract
    The CIDOC CRM provides an ontology for describing entities, properties and relationships appearing in cultural heritage (CH) documentation, history and archeology. CRM promotes shared understanding by providing an extensible semantic framework that any CH information can be mapped to. CRM data is usually represented in semantic web format (RDF) and comprises complex graphs of nodes and properties. An important question is how a user can search through such complex graphs, since the number of possible combinations is staggering. One approach "compresses" the semantic network by mapping many CRM entity classes to a few "Fundamental Concepts" (FC), and mapping whole networks of CRM properties to fewer "Fundamental Relations" (FR). These FC and FRs serve as a "search index" over the CRM semantic web and allow the user to use a simpler query vocabulary. We describe an implementation of CRM FR Search based on OWLIM Rules, done as part of the ResearchSpace (RS) project. We describe the technical details, problems and difficulties encountered, benefits and disadvantages of using OWLIM rules, and preliminary performance results. We provide implementation experience that can be valuable for further implementation, definition and maintenance of CRM FRs.
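The "compression" idea in this abstract, mapping many entity classes to a few Fundamental Concepts (FC) that then serve as a search index, can be illustrated with a toy sketch. The class and concept names below are invented for illustration and are not actual CIDOC CRM identifiers:

```python
# Toy illustration: map many specific classes to a few Fundamental Concepts (FC),
# then index records by FC so queries can use the smaller vocabulary.
FC_OF = {
    # hypothetical class names, not actual CIDOC CRM classes
    "Painting": "Thing", "Sculpture": "Thing", "Manuscript": "Thing",
    "Person": "Actor", "Group": "Actor",
    "City": "Place", "Site": "Place",
}

def build_fc_index(records):
    """records: list of (record_id, class_name) pairs."""
    index = {}
    for rec_id, cls in records:
        fc = FC_OF.get(cls)
        if fc is not None:
            index.setdefault(fc, set()).add(rec_id)
    return index

records = [(1, "Painting"), (2, "Person"), (3, "City"), (4, "Sculpture")]
index = build_fc_index(records)
print(sorted(index["Thing"]))  # [1, 4]: both records collapse to the FC "Thing"
```

The paper's actual implementation derives such mappings with OWLIM rules over RDF graphs; this sketch only shows why the reduced FC/FR vocabulary makes the search space manageable.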
  7. Metrics in research : for better or worse? (2016) 0.00
    0.0019388841 = product of:
      0.02326661 = sum of:
        0.02326661 = weight(_text_:world in 3312) [ClassicSimilarity], result of:
          0.02326661 = score(doc=3312,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 3312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=3312)
      0.083333336 = coord(1/12)
    
    Content
     Contents:
     • Metrics in Research - For better or worse? / Jozica Dolenc, Philippe Hünenberger, Oliver Renn
     • A brief visual history of research metrics / Oliver Renn, Jozica Dolenc, Joachim Schnabl
     • Bibliometry: The wizard of O's / Philippe Hünenberger
     • The grip of bibliometrics - A student perspective / Matthias Tinzl
     • Honesty and transparency to taxpayers is the long-term fundament for stable university funding / Wendelin J. Stark
     • Beyond metrics: Managing the performance of your work / Charlie Rapple
     • Scientific profiling instead of bibliometrics: Key performance indicators of the future / Rafael Ball
     • More knowledge, less numbers / Carl Philipp Rosenau
     • Do we really need BIBLIO-metrics to evaluate individual researchers? / Rüdiger Mutz
     • Using research metrics responsibly and effectively as a researcher / Peter I. Darroch, Lisa H. Colledge
     • Metrics in research: More (valuable) questions than answers / Urs Hugentobler
     • Publication of research results: Use and abuse / Wilfred F. van Gunsteren
     • Wanted: Transparent algorithms, interpretation skills, common sense / Eva E. Wille
     • Impact factors, the h-index, and citation hype - Metrics in research from the point of view of a journal editor / Renato Zenobi
     • Rashomon or metrics in a publisher's world / Gabriella Karger
     • The impact factor and I: A love-hate relationship / Jean-Christophe Leroux
     • Personal experiences bringing altmetrics to the academic market / Ben McLeish
     • Fatally attracted by numbers? / Oliver Renn
     • On computable numbers / Gerd Folkers, Laura Folkers
     • ScienceMatters - Single observation science publishing and linking observations to create an internet of science / Lawrence Rajendran
  8. nestor-Handbuch : eine kleine Enzyklopädie der digitalen Langzeitarchivierung (2010) 0.00
    0.0016554145 = product of:
      0.019864973 = sum of:
        0.019864973 = product of:
          0.039729945 = sum of:
            0.039729945 = weight(_text_:2.0 in 3716) [ClassicSimilarity], result of:
              0.039729945 = score(doc=3716,freq=2.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.1922358 = fieldWeight in 3716, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3716)
          0.5 = coord(1/2)
      0.083333336 = coord(1/12)
    
    Abstract
     Astronomers are in a happier position than you are with your private digital data when they hunt for observation data that is already decades old. Although the image and data archives of these observations were stored in many very different formats, there is always a way to read and interpret the original data via suitable interface procedures. This is possible because the so-called Virtual Observatory networks the archives of astronomical observations worldwide and keeps them accessible in the latest digital formats, be they digital images of asteroids, planetary motions, the Milky Way, or simulations of the Big Bang. Even photographic plates from the beginning of the 20th century have been systematically digitized and are available for reuse. Older and newer digital data and images can thus be used together, offering a view of the universe that spans far more wavelengths than human senses alone can perceive. We are pleased to present, with the nestor handbook "Eine kleine Enzyklopädie der digitalen Langzeitarchivierung" (a small encyclopedia of digital long-term preservation), an overview of the current state of knowledge on the long-term preservation of digital objects, covering many subfields, now also in printed form. The handbook has been available in a digital version at http://nestor.sub.uni-goettingen.de/handbuch/ since spring 2007 and has since been updated at several intervals. The present version 2.0, printed here and still available for free download at the URL above, has been restructured, extended with new topic areas, and existing contributions have been revised where this was warranted. Owing to the way it came into being, the individual chapters are somewhat heterogeneous, e.g. with respect to how exhaustively a topic is treated or to writing style.
     The editors have not primarily aimed to even this out through editorial copy-editing or to present an altogether coherent complete work. Rather, their concern is to offer the German-speaking community as up-to-date a "small encyclopedia of digital long-term preservation" as possible.
  9. Open MIND (2015) 0.00
    0.0010058414 = product of:
      0.012070097 = sum of:
        0.012070097 = product of:
          0.024140194 = sum of:
            0.024140194 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
              0.024140194 = score(doc=1648,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.19345059 = fieldWeight in 1648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1648)
          0.5 = coord(1/2)
      0.083333336 = coord(1/12)
    
    Date
    27. 1.2015 11:48:22