Search (20 results, page 1 of 1)

  • Filter: type_ss:"s"
  • Filter: type_ss:"el"
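
    The type_ss field-name suffix and the per-hit score breakdowns below suggest an Apache Solr backend. As a hypothetical sketch (host, core and query string are assumptions; the page does not show the original query terms), the same search would pass each filter as an fq parameter and enable debugQuery, which is what produces the indented scoring explanations attached to every result:

        http://localhost:8983/solr/<core>/select?q=...&fq=type_ss:"s"&fq=type_ss:"el"&debugQuery=true
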
  1. Open MIND (2015) 0.02
    0.017778447 = product of:
      0.035556894 = sum of:
        0.008924231 = weight(_text_:in in 1648) [ClassicSimilarity], result of:
          0.008924231 = score(doc=1648,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 1648, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1648)
        0.01184633 = weight(_text_:und in 1648) [ClassicSimilarity], result of:
          0.01184633 = score(doc=1648,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.12243814 = fieldWeight in 1648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1648)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
              0.029572664 = score(doc=1648,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 1648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1648)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
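
    The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: each leaf term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm, and coord(m/n) scales a sum of clauses when only m of n query clauses match. A minimal Python sketch, using only the constants printed above (the function and variable names are illustrative, not part of the engine), reproduces the total for this hit:

        import math

        def leaf_score(freq, idf, field_norm, query_norm):
            # One term's contribution under ClassicSimilarity:
            # score = queryWeight * fieldWeight
            tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
            query_weight = idf * query_norm       # queryWeight = idf * queryNorm
            field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
            return query_weight * field_weight

        query_norm = 0.043654136  # queryNorm, shared by all clauses
        field_norm = 0.0390625    # fieldNorm(doc=1648)

        s_in  = leaf_score(8.0, 1.3602545, field_norm, query_norm)        # ~0.008924
        s_und = leaf_score(2.0, 2.216367, field_norm, query_norm)         # ~0.011846
        s_22  = leaf_score(2.0, 3.5018296, field_norm, query_norm) * 0.5  # inner coord(1/2)

        total = (s_in + s_und + s_22) * 0.5  # outer coord(3/6)
        print(f"{total:.9f}")                # ~0.017778447 (shown rounded as 0.02)

    The same arithmetic, with each document's own freq, fieldNorm and coord values, accounts for every score tree in this list.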
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Content
    Cf. the article: Lenzen, M.: Vor der Quadratwurzel steht die Quadratzahl. In: http://www.faz.net/aktuell/feuilleton/forschung-und-lehre/uni-mainz-stellt-publikationen-von-hirnforschern-online-13379697.html.
    Date
    27.1.2015 11:48:22
  2. Ernst-von-Glasersfeld-Lectures 2015 (2015) 0.02
    0.017718678 = product of:
      0.053156033 = sum of:
        0.006246961 = weight(_text_:in in 2664) [ClassicSimilarity], result of:
          0.006246961 = score(doc=2664,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 2664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2664)
        0.04690907 = weight(_text_:und in 2664) [ClassicSimilarity], result of:
          0.04690907 = score(doc=2664,freq=16.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.4848303 = fieldWeight in 2664, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2664)
      0.33333334 = coord(2/6)
    
    Abstract
    This volume documents the Ernst-von-Glasersfeld Lectures 2015 at the University of Innsbruck. Alongside the lectures by Siegfried J. Schmidt and Gebhard Rusch, two further contributions from the Ernst-von-Glasersfeld Archive are reprinted. With documentary and media-art intent, these deal with the Lana project and "Yerkish", the first sign language for primates, which the philosopher and communication scientist Ernst von Glasersfeld (1917-2010) developed together with Piero Pisani at the University of Georgia in the early 1970s.
    Content
    Includes the contributions: Siegfried J. Schmidt: vorläufig endgültig vorläufig - Philosophieren nach Ernst von Glasersfeld - Gebhard Rusch: Sicherheit und Freiheit - Jona Hoier, Markus Murschitz and Theo Hug: BANANA PERIOD - Ein Lichtprojekt an den Nahtstellen von Medienkunst und Wissenschaftskommunikation - Michael Schorner: Sprechen Sie Yerkish? - Ernst von Glasersfelds Beitrag zum LANA Projekt - zwischen Operationalismus und Radikalem Konstruktivismus.
  3. Bibliotheken und die Vernetzung des Wissens (2002) 0.02
    0.017532641 = product of:
      0.05259792 = sum of:
        0.011288359 = weight(_text_:in in 1741) [ClassicSimilarity], result of:
          0.011288359 = score(doc=1741,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19010136 = fieldWeight in 1741, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1741)
        0.04130956 = weight(_text_:und in 1741) [ClassicSimilarity], result of:
          0.04130956 = score(doc=1741,freq=38.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.42695636 = fieldWeight in 1741, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=1741)
      0.33333334 = coord(2/6)
    
    Abstract
    Libraries are increasingly leaving their traditional paths (collecting and providing information and media) and trying out new institutional orientations. With regard to their target groups, they are additionally taking on social, cultural and learning-support functions. This change of profile, however, can only take place within a networked structure of educational institutions, cultural institutions and social services in a coherent spatial context. Cooperative education and learning arrangements serve this purpose, as do technical systems for digital information presentation, retrieval and interaction oriented to user needs. For practitioners as well as academically interested readers from libraries and their cooperating partners in education and culture, this volume offers a wealth of practice-oriented examples that point up possible profile changes and design latitude for libraries.
    Classification
    AN 65100 Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft / Bibliothekswesen / Grundlagen des Bibliothekswesens / Begriff, Wesen der Bibliothek
    Content
    Includes the contributions: (1) Organisationsveränderung durch kooperatives Handeln: STANG, R.: Vernetzung als Zukunftsmodell; BUßMANN, I. et al.: Innovative Lernarrangements in der institutionellen Umsetzung; KIL, M.: Lernveränderung = Organisationsveränderung?; STANNETT, A. and A. CHADWICK: Lifelong learning, libraries and museums in the United Kingdom; UMLAUF, K.: Professionsveränderung in netzwerkbezogenen Arbeitsumgebungen - (2) Lernender Stadtteil: STEFFEN, G.: Die Stadt und der Stadtteil; PUHL, A.: Stadtteilbüchereien in der Fremdsicht; PUHL, A.: Institutionelle Kooperation in der Bildungsberatung - (3) Informationsnetze - gesucht und nicht gefunden: GÖDERT, W.: Zwischen Individuum und Wissen; THISSEN, F.: Verloren in digitalen Netzen - (4) Umsetzungshilfen: STANG, R.: Lernarrangements und Wissensangebote gestalten - (5) Ausblick: PUHL, A.: Aktuelle Forschungsbedarfe
    Footnote
    The accompanying CD-ROM contains all materials of the project "Entwicklung und Förderung innovativer weiterbildender Lernarrangements in Kultur- und Weiterbildungseinrichtungen (EFIL)": searches, studies, expert reports, project database. -
    Review in: ZfBB 50(2003) no.1, p.58-59 (H.-W. Hoffmann)
    RVK
    AN 65100 Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft / Bibliothekswesen / Grundlagen des Bibliothekswesens / Begriff, Wesen der Bibliothek
  4. Gehirn, Gedächtnis, neuronale Netze (1996) 0.01
    0.014718467 = product of:
      0.0441554 = sum of:
        0.023454536 = weight(_text_:und in 4661) [ClassicSimilarity], result of:
          0.023454536 = score(doc=4661,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.24241515 = fieldWeight in 4661, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4661)
        0.020700864 = product of:
          0.04140173 = sum of:
            0.04140173 = weight(_text_:22 in 4661) [ClassicSimilarity], result of:
              0.04140173 = score(doc=4661,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.2708308 = fieldWeight in 4661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4661)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    Includes the contributions: EPPING, B.: 100 Millionen Zellen und ein bißchen Strom (Informationsfluß); FLESCHNER, F.: Nicht ganz richtig im Kopf (Hirnschäden); RUBNER, J.: Auf der Spur des kleinen Unterschieds (Hormonhaushalt); REYE, B.: Blinde sehen, Lahme gehen (Neuroimplantate); EBERT, U.: Die Neuro-Industrie (Erkennungsdienst); SCHMITZ, U.: Die Kraft der Gedanken (Mindpower); ZICK, M.: Das Rätsel des Bewußtseins (Geisterjagd); N.N.: Das Beste aus zwei Welten (Online-Medizin); HÄGELE, M.: Literatur-Recherche auf dem Prüfstand (KnowledgeFinder); N.N.: Die Arzthelfer; N.N.: Mit der Stange im Nebel (Internet-Adressen)
    Date
    22.7.2000 18:45:51
    Footnote
    The accompanying CD contains: (1) Neuronale Netze zum Ausprobieren; (2) Das Wunder unseres Körpers; (3) Gedächtnistraining und Memotechniken
  5. Vernetztes Wissen - Daten, Menschen, Systeme : 6. Konferenz der Zentralbibliothek Forschungszentrum Jülich. 5. - 7. November 2012 - Proceedingsband: WissKom 2012 (2012) 0.01
    0.014422532 = product of:
      0.043267597 = sum of:
        0.007728611 = weight(_text_:in in 482) [ClassicSimilarity], result of:
          0.007728611 = score(doc=482,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 482, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=482)
        0.035538986 = weight(_text_:und in 482) [ClassicSimilarity], result of:
          0.035538986 = score(doc=482,freq=18.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.3673144 = fieldWeight in 482, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=482)
      0.33333334 = coord(2/6)
    
    Abstract
    Information and knowledge transfer are shifting ever more strongly into the digital world, enabled not least by the Internet's advancing penetration of all areas of life. Knowledge is increasingly becoming networked knowledge. The Jülich conference WissKom2012 addresses how libraries can adapt to this development and help shape it through innovative services. The conference title "Vernetztes Wissen: Daten, Menschen, Systeme" hints at the mutual networking of data, people and systems with and among one another; the goal is to connect existing isolated solutions and to develop new concepts for inherently networked structures. With WissKom2012 the Central Library of Forschungszentrum Jülich again takes up topics at the intersection of library, information and science in an interdisciplinary conference and attempts to point out new fields of action for libraries. This sixth conference of the Central Library addresses the increasingly important area of research data and its sustainable handling. It shows what interdisciplinarity means in concrete terms and how hitherto isolated systems can be networked to create added value. Besides the speakers' papers, the proceedings volume also contains the poster session contributions and the keynote by Prof. Viktor Mayer-Schönberger entitled "Delete: Die Tugend des Vergessens in digitalen Zeiten".
  6. nestor-Handbuch : eine kleine Enzyklopädie der digitalen Langzeitarchivierung (2010) 0.01
    0.0138171725 = product of:
      0.041451517 = sum of:
        0.008879498 = weight(_text_:in in 3716) [ClassicSimilarity], result of:
          0.008879498 = score(doc=3716,freq=22.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14953499 = fieldWeight in 3716, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3716)
        0.03257202 = weight(_text_:und in 3716) [ClassicSimilarity], result of:
          0.03257202 = score(doc=3716,freq=42.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.3366492 = fieldWeight in 3716, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3716)
      0.33333334 = coord(2/6)
    
    Abstract
    Preface. Imagine: it is the year 2030, somewhere in Germany. Somewhere? No, in your own living room, where you are proudly telling your grandchildren about your 2010 circumnavigation of the globe. You would like to accompany the story with vivid images and period music, which back then contributed substantially to the myths and legends in your circle of friends; ever since, you have enjoyed the reputation of an intrepid hero. Now it is time to keep this little story alive and pass it on to the next generation, your grandchildren. But your GODD (Global Omnipresent Digital Device) refuses even to read the elaborately produced video show. On the contrary, you are tersely informed that it is obsolete technology that is no longer supported, and that you should please consult a "data archaeologist" of your trust. But what exactly is a "data archaeologist"? A data archaeologist restores data that is no longer readable so that it can be used again. He - or she - is called in once the damage has already occurred. But it should not come to that. That is why we need experts such as the "digital curator" or the "digital preservation specialist", who ensures that the long-term preservation of digital data is taken into account from the moment it is created. He - or she - is able to support an institution in developing a long-term preservation strategy for the data it produces, or to plan and carry out developments in a trustworthy digital long-term repository.
    Luckier than you with your private digital data are the astronomers when they hunt for observation data that is decades old. Although the image and data archives of these observations were stored in manifold and very different formats, suitable interface procedures always make it possible to read and interpret the original data. This is because the so-called Virtual Observatory networks the archives of astronomical observations worldwide and keeps them accessible in the latest digital formats, be they digital images of asteroids, planetary movements, the Milky Way or even simulations of the Big Bang. Even photographic plates from the beginning of the 20th century have been systematically digitised and are available for reuse. Older and newer digital data and images can thus be used together and afford a view of the universe spanning far more wavelengths than human senses alone can perceive. We are pleased to present the current state of knowledge on the long-term preservation of digital objects, both in overview and in many subfields, with the nestor handbook "Eine kleine Enzyklopädie der digitalen Langzeitarchivierung", now also in printed form. The handbook has been available in a digital version at http://nestor.sub.uni-goettingen.de/handbuch/ since spring 2007 and has been updated at several intervals since then. The present version 2.0 - printed here and still downloadable free of charge at the above URL - has been restructured, supplemented with new topics, and existing contributions have been revised where appropriate. Its genesis entails a certain heterogeneity between the individual chapters, e.g. in the depth of treatment of a topic or in writing style. The editors have not primarily aimed to even this out editorially or to present an altogether coherent work; rather, their concern is to offer the German-speaking community as up-to-date a "small encyclopedia of digital long-term preservation" as possible.
    The free digital version of the handbook, available in parallel, will be updated and expanded as needed; a second print run is already planned. We gladly welcome your suggestions and will take them into account in future updates! Our thanks go above all to the authors, without whom such a handbook would have remained a mere idea. My thanks also go to the co-editors of this edition, whose committed stimulating and "taming" of the authors made it possible to bring the many contributions together into a single work. Together with all involved, I hope that this handbook offers you helpful suggestions and guidance for a successful start in the theory and practice of the long-term preservation of digital objects!
  7. Ruhl, M.: Do we need metadata? : an on-line survey in German archives (2012) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 471) [ClassicSimilarity], result of:
          0.013115887 = score(doc=471,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 471, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=471)
      0.16666667 = coord(1/6)
    
    Abstract
    The paper summarizes the results of an on-line survey conducted in 2010 in German archives of all branches. The survey focused on metadata and the metadata standards used for the annotation of audiovisual media such as pictures, audio and video files (analog and digital). The findings raise the question of whether archives will be able to collaborate in projects like Europeana if they do not orient themselves towards accepted standards. Archives need more resources, and archival staff need more training, to execute more complex tasks in a digital and semantic environment.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  8. Voigt, M.; Mitschick, A.; Schulz, J.: Yet another triple store benchmark? : practical experiences with real-world data (2012) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 476) [ClassicSimilarity], result of:
          0.013115887 = score(doc=476,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 476, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=476)
      0.16666667 = coord(1/6)
    
    Abstract
    Although quite a number of RDF triple store benchmarks have already been conducted and published, it is not that easy to find the right storage solution for a particular Semantic Web project. A basic reason is the lack of comprehensive performance tests with real-world data. Confronted with this problem, we set up and ran our own tests with a selection of four up-to-date triple store implementations - and came to interesting findings. In this paper, we briefly present the benchmark setup, including the store configuration, the datasets and the test queries. Based on a set of metrics, our results demonstrate the importance of real-world datasets in identifying anomalies or differences in reasoning. Finally, we must state that it is indeed difficult to give a general recommendation, as no store wins in every field.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  9. Metrics in research : for better or worse? (2016) 0.00
    0.0020609628 = product of:
      0.012365777 = sum of:
        0.012365777 = weight(_text_:in in 3312) [ClassicSimilarity], result of:
          0.012365777 = score(doc=3312,freq=24.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 3312, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3312)
      0.16666667 = coord(1/6)
    
    Abstract
    If you are an academic researcher and have not yet earned your Nobel prize or your retirement, it is unlikely that you have never heard of research metrics. These metrics aim at quantifying various aspects of the research process, at the level of individual researchers (e.g. h-index, altmetrics), scientific journals (e.g. impact factors) or entire universities/countries (e.g. rankings). Although such "measurements" have existed in a simple form for a long time, their widespread calculation was enabled by the advent of the digital era (large amounts of data available worldwide in a computer-compatible format). And in this new era, what becomes technically possible will be done, and what is done and appears to simplify our lives will be used. As a result, a rapidly growing number of statistics-based numerical indices are nowadays fed into decision-making processes. This is true in nearly all aspects of society (politics, economy, education and private life), and in particular in research, where metrics play an increasingly important role in determining positions, funding, awards, research programs, career choices, reputations, etc.
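
    The h-index mentioned above has a simple definition: a researcher has index h if h of his or her papers have been cited at least h times each. A minimal sketch (the code is illustrative; only the definition, not this implementation, comes from the literature):

        def h_index(citations):
            # Largest h such that at least h papers have >= h citations.
            ranked = sorted(citations, reverse=True)
            return max((i + 1 for i, c in enumerate(ranked) if c >= i + 1), default=0)

        print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers have >= 4 citations each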
    Content
    Contents: Metrics in Research - For better or worse? / Jozica Dolenc, Philippe Hünenberger, Oliver Renn - A brief visual history of research metrics / Oliver Renn, Jozica Dolenc, Joachim Schnabl - Bibliometry: The wizard of O's / Philippe Hünenberger - The grip of bibliometrics - A student perspective / Matthias Tinzl - Honesty and transparency to taxpayers is the long-term fundament for stable university funding / Wendelin J. Stark - Beyond metrics: Managing the performance of your work / Charlie Rapple - Scientific profiling instead of bibliometrics: Key performance indicators of the future / Rafael Ball - More knowledge, less numbers / Carl Philipp Rosenau - Do we really need BIBLIO-metrics to evaluate individual researchers? / Rüdiger Mutz - Using research metrics responsibly and effectively as a researcher / Peter I. Darroch, Lisa H. Colledge - Metrics in research: More (valuable) questions than answers / Urs Hugentobler - Publication of research results: Use and abuse / Wilfred F. van Gunsteren - Wanted: Transparent algorithms, interpretation skills, common sense / Eva E. Wille - Impact factors, the h-index, and citation hype - Metrics in research from the point of view of a journal editor / Renato Zenobi - Rashomon or metrics in a publisher's world / Gabriella Karger - The impact factor and I: A love-hate relationship / Jean-Christophe Leroux - Personal experiences bringing altmetrics to the academic market / Ben McLeish - Fatally attracted by numbers? / Oliver Renn - On computable numbers / Gerd Folkers, Laura Folkers - ScienceMatters - Single observation science publishing and linking observations to create an internet of science / Lawrence Rajendran.
  10. Bozzato, L.; Braghin, S.; Trombetta, A.: A method and guidelines for the cooperation of ontologies and relational databases in Semantic Web applications (2012) 0.00
    0.0019676082 = product of:
      0.011805649 = sum of:
        0.011805649 = weight(_text_:in in 475) [ClassicSimilarity], result of:
          0.011805649 = score(doc=475,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19881277 = fieldWeight in 475, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=475)
      0.16666667 = coord(1/6)
    
    Abstract
    Ontologies are a well-established way of representing complex structured information, and they provide a sound conceptual foundation for Semantic Web technologies. On the other hand, a huge amount of information available on the Web is stored in legacy relational databases. The issues raised by the collaboration between these worlds are well known and addressed by consolidated mapping languages. Nevertheless, to the best of our knowledge, a best practice for such cooperation is missing: in this work we therefore present a method to guide the definition of cooperations between ontology-based systems and relational database systems. Our method, mainly based on ideas from knowledge reuse and re-engineering, aims at the separation of data between database and ontology instances and at the definition of suitable mappings in both directions, taking advantage of the representation possibilities offered by both models. We present the steps of our method along with guidelines for their application. Finally, we propose an example of its deployment in the context of a large repository of bio-medical images we developed.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  11. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    0.0019449911 = product of:
      0.011669946 = sum of:
        0.011669946 = weight(_text_:in in 3391) [ClassicSimilarity], result of:
          0.011669946 = score(doc=3391,freq=38.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19652747 = fieldWeight in 3391, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3391)
      0.16666667 = coord(1/6)
    
    Abstract
    The W3C OWL Web Ontology Language has been a W3C recommendation since 2004, and the specification of its successor OWL 2 is being finalised. OWL plays an important role in an increasing number and range of applications, and as experience using the language grows, new ideas for further extending its reach continue to be proposed. The OWL: Experiences and Directions (OWLED) workshop series is a forum for practitioners in industry and academia, tool developers, and others interested in OWL to describe real and potential applications, to share experience, and to discuss requirements for language extensions and modifications. The workshop will bring users, implementors and researchers together to measure the state of need against the state of the art, and to set an agenda for research and deployment in order to incorporate OWL-based technologies into new applications. The 2009 OWLED workshop will be co-located with the Eighth International Semantic Web Conference (ISWC) and the Third International Conference on Web Reasoning and Rule Systems (RR2009). It will be held in Chantilly, VA, USA on October 23 - 24, 2009. The workshop will concentrate on issues related to the development and W3C standardization of OWL 2, and beyond, but other issues related to OWL are also of interest, particularly those related to the task forces set up at OWLED 2007. As usual, the workshop will try to encourage participants to work together and will give space for discussions on various topics, to be decided and published at some point in the future. We ask participants to have a look at these topics and the accepted submissions before the workshop, and to prepare single "slides" that can be presented during these discussions. There will also be formal presentations of submissions to the workshop.
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
    Demo/Position Papers * Conjunctive Query Answering in Distributed Ontology Systems for Ontologies with Large OWL ABoxes, Xueying Chen and Michel Dumontier. * Node-Link and Containment Methods in Ontology Visualization, Julia Dmitrieva and Fons J. Verbeek. * A JC3IEDM OWL-DL Ontology, Steven Wartik. * Semantically Enabled Temporal Reasoning in a Virtual Observatory, Patrick West, Eric Rozell, Stephan Zednik, Peter Fox and Deborah L. McGuinness. * Developing an Ontology from the Application Up, James Malone, Tomasz Adamusiak, Ele Holloway, Misha Kapushesky and Helen Parkinson.
  12. Bahls, D.; Scherp, G.; Tochtermann, K.; Hasselbring, W.: Towards a recommender system for statistical research data (2012) 0.00
    0.001821651 = product of:
      0.010929906 = sum of:
        0.010929906 = weight(_text_:in in 474) [ClassicSimilarity], result of:
          0.010929906 = score(doc=474,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18406484 = fieldWeight in 474, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=474)
      0.16666667 = coord(1/6)
    
    Abstract
    To effectively promote the exchange of scientific data, retrieval services are required that suit the needs of the research community. A large amount of research in the field of economics is based on statistical data, which is often drawn from external sources such as data agencies, statistical offices or affiliated institutes. Since producing such data for a particular research question is expensive in time and money - if possible at all - research activities are often influenced by the availability of suitable data. Researchers choose or adjust their questions so that the empirical foundation to support their results is given. As a consequence, researchers look out and poll for newly available data in all sorts of directions, owing to the lack of an information infrastructure for this domain. This circumstance, and a recent report from the High Level Expert Group on Scientific Data, motivate recommendation and notification services for research data sets. In this paper, we elaborate on a case-based recommender system for statistical data that allows for precise query specification. We discuss the required similarity measures on the basis of cross-domain code lists and propose a system architecture. To address the problem of continuous polling, we elaborate on a notification service to inform researchers about newly available data sets based on their personal requests.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  13. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 469) [ClassicSimilarity], result of:
          0.010709076 = score(doc=469,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
      0.16666667 = coord(1/6)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We discuss options as well as limitations and open challenges in achieving sound preservation, specifically within scientific processes.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  14. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.00
    0.0016695707 = product of:
      0.010017424 = sum of:
        0.010017424 = weight(_text_:in in 468) [ClassicSimilarity], result of:
          0.010017424 = score(doc=468,freq=28.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1686982 = fieldWeight in 468, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
      0.16666667 = coord(1/6)
    
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of content created digitally has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation, e.g. digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving communities have made valuable contributions to the management of huge amounts of knowledge and data. However, they approach this topic from different views, which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving communities, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, which are supported by appropriate W3C recommendations and a large user community. At the forefront of making the semantic web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives. Semantic representations of contextual knowledge about cultural heritage objects will enhance organization of and access to data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human Computer Interaction) will also be included in the research.
    One of the major challenges of digital archiving is how to deal with changing technologies and changing user communities. On the one hand, software, hardware and (multimedia) data formats that have become obsolete and are no longer supported still need to be kept accessible. On the other hand, changing user communities necessitate technical means to formalize, detect and measure knowledge evolution. Furthermore, digital archival records are usually not deleted from the AIS, and therefore the amount of digitally archived (multimedia) content can be expected to grow rapidly. Efficient storage management solutions are therefore required, geared to the fact that cultural heritage is not accessed as frequently as up-to-date content residing in a digital library. Software and hardware need to be tightly connected on the basis of sophisticated knowledge representation and management models in order to face this challenge. In line with the above, contributions to the workshop should focus on, but are not limited to:
    Semantic search & semantic information retrieval in digital archives and digital libraries; semantic multimedia archives; ontologies & linked data for digital archives and digital libraries; ontologies & linked data for multimedia archives; implementations and evaluations of semantic digital archives; visualization and exploration of digital content; user interfaces for semantic digital libraries; user interfaces for intelligent multimedia information retrieval; user studies focusing on end-user needs and information seeking behavior of end-users; theoretical and practical archiving frameworks using Semantic (Web) technologies; logical theories for digital archives; Semantic (Web) services implementing the OAIS standard; semantic or logical provenance models for digital archives or digital libraries; information integration/semantic ingest (e.g. from digital libraries); trust for ingest and data security/integrity checks for long-term storage of archival records; semantic extensions of emulation/virtualization methodologies tailored for digital archives; semantic long-term storage and hardware organization tailored for AIS; migration strategies based on Semantic (Web) technologies; knowledge evolution. We expect new insights and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience consisting of scientists and scholars from the digital library, digital archiving, multimedia technology and semantic web communities, the information and library sciences, as well as from the social sciences and (digital) humanities, in particular people working on the mentioned topics. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  15. Networked knowledge organization systems (2001) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 6473) [ClassicSimilarity], result of:
          0.009274333 = score(doc=6473,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 6473, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6473)
      0.16666667 = coord(1/6)
    
    Abstract
    Knowledge Organization Systems can comprise thesauri and other controlled lists of keywords, ontologies, classification systems, clustering approaches, taxonomies, gazetteers, dictionaries, lexical databases, concept maps/spaces, semantic road maps, etc. These schemas enable knowledge structuring and management, knowledge-based data processing and systematic access to knowledge structures in individual collections and digital libraries. Used as interactive information services on the Internet, they have an increased potential to support the description, discovery and retrieval of heterogeneous information resources and to contribute to an overall resource discovery infrastructure.
    Content
    This issue of the Journal of Digital Information evolved from a workshop on Networked Knowledge Organization Systems (NKOS) held at the Fourth European Conference on Research and Advanced Technology for Digital Libraries (ECDL2000) in Lisbon during September 2000. The focus of the workshop was European NKOS initiatives and projects and options for global cooperation. The workshop organizers were Martin Doerr, Traugott Koch, Douglas Tudhope and Repke de Vries. This group, with Traugott Koch as the main editor and with the help of Linda Hill, cooperated in the editorial tasks for this special issue.
  16. Alexiev, V.: Implementing CIDOC CRM search based on fundamental relations and OWLIM rules (2012) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 467) [ClassicSimilarity], result of:
          0.008924231 = score(doc=467,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 467, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=467)
      0.16666667 = coord(1/6)
    
    Abstract
    The CIDOC CRM provides an ontology for describing entities, properties and relationships appearing in cultural heritage (CH) documentation, history and archeology. CRM promotes shared understanding by providing an extensible semantic framework that any CH information can be mapped to. CRM data is usually represented in semantic web format (RDF) and comprises complex graphs of nodes and properties. An important question is how a user can search through such complex graphs, since the number of possible combinations is staggering. One approach "compresses" the semantic network by mapping many CRM entity classes to a few "Fundamental Concepts" (FC) and mapping whole networks of CRM properties to fewer "Fundamental Relations" (FR). These FCs and FRs serve as a "search index" over the CRM semantic web and allow the user to use a simpler query vocabulary. We describe an implementation of CRM FR Search based on OWLIM Rules, done as part of the ResearchSpace (RS) project. We describe the technical details, the problems and difficulties encountered, the benefits and disadvantages of using OWLIM rules, and preliminary performance results. We provide implementation experience that can be valuable for the further implementation, definition and maintenance of CRM FRs.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  17. Dietze, S.; Maynard, D.; Demidova, E.; Risse, T.; Stavrakas, Y.: Entity extraction and consolidation for social Web content preservation (2012) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 470) [ClassicSimilarity], result of:
          0.008924231 = score(doc=470,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 470, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
      0.16666667 = coord(1/6)
    
    Abstract
    With the rapidly increasing pace at which Web content is evolving, particularly social media, preserving the Web and its evolution over time becomes an important challenge. Meaningful analysis of Web content lends itself to an entity-centric view that organises Web resources according to the information objects related to them. The crucial challenge, therefore, is to extract, detect and correlate entities from a vast number of heterogeneous Web resources, where the nature and quality of the content may vary heavily. While a wealth of information extraction tools aid this process, we believe that the consolidation of automatically extracted data has to be treated as an equally important step in order to ensure high quality and non-ambiguity of the generated data. In this paper we present an approach based on an iterative cycle exploiting Web data for (1) targeted archiving/crawling of Web objects, (2) entity extraction and detection, and (3) entity correlation. The long-term goal is to preserve Web content over time and allow its navigation and analysis based on well-formed structured RDF data about entities.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  18. Wartena, C.; Sommer, M.: Automatic classification of scientific records using the German Subject Heading Authority File (SWD) (2012) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 472) [ClassicSimilarity], result of:
          0.008924231 = score(doc=472,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 472, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=472)
      0.16666667 = coord(1/6)
    
    Abstract
    The following paper deals with an automatic text classification method which does not require training documents. For this method the German Subject Heading Authority File (SWD), provided by the linked data service of the German National Library, is used. Recently the SWD was enriched with notations of the Dewey Decimal Classification (DDC). In consequence it became possible to utilize the subject headings as textual representations for the notations of the DDC. Basically, we derive the classification of a text from the classification of the words in the text given by the thesaurus. The method was tested by classifying 3826 OAI records from 7 different repositories. Mean reciprocal rank and recall were chosen as evaluation measures. Direct comparison to a machine learning method has shown that this method is definitely competitive. Thus we can conclude that the enriched version of the SWD provides high-quality information with broad coverage for the classification of German scientific articles.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  19. Grassi, M.; Morbidoni, C.; Nucci, M.; Fonda, S.; Ledda, G.: Pundit: semantically structured annotations for Web contents and digital libraries (2012) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 473) [ClassicSimilarity], result of:
          0.008924231 = score(doc=473,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 473, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=473)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper introduces Pundit: a novel semantic annotation tool that allows users to create structured data while annotating Web pages, relying on stand-off mark-up techniques. Pundit provides support for different types of annotations, ranging from simple comments to semantic links to Web of data entities and fine-grained cross-references and citations. In addition, it can be configured to include custom controlled vocabularies and has been designed to enable groups of users to share their annotations and collaboratively create structured knowledge. Pundit allows creating semantically typed relations among heterogeneous resources, both having different multimedia formats and belonging to different pages and domains. In this way, annotations can reinforce existing data connections or create new ones and augment the original information, generating new semantically structured aggregations of knowledge. These can later be exploited both by other users to better navigate DL and Web content, and by applications to improve data management.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  20. Third International World Wide Web Conference, Darmstadt 1995 : [table of contents] (1995) 0.00
    8.9242304E-4 = product of:
      0.005354538 = sum of:
        0.005354538 = weight(_text_:in in 3458) [ClassicSimilarity], result of:
          0.005354538 = score(doc=3458,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 3458, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
      0.16666667 = coord(1/6)
    
    Abstract
    ANDREW, K. and F. KAPPE: Serving information to the Web with Hyper-G; BARBIERI, K., H.M. DOERR and D. DWYER: Creating a virtual classroom for interactive education on the Web; CAMPBELL, J.K., S.B. JONES, N.M. STEPHENS and S. HURLEY: Constructing educational courseware using NCSA Mosaic and the World Wide Web; CATLEDGE, L.L. and J.E. PITKOW: Characterizing browsing strategies in the World-Wide Web; CLAUSNITZER, A. and P. VOGEL: A WWW interface to the OMNIS/Myriad literature retrieval engine; FISCHER, R. and L. PERROCHON: IDLE: Unified W3-access to interactive information servers; FOLEY, J.D.: Visualizing the World-Wide Web with the navigational view builder; FRANKLIN, S.D. and B. IBRAHIM: Advanced educational uses of the World-Wide Web; FUHR, N., U. PFEIFER and T. HUYNH: Searching structured documents with the enhanced retrieval functionality of free WAIS-sf and SFgate; FIORITO, M., J. OKSANEN and D.R. IOIVANE: An educational environment using WWW; KENT, R.E. and C. NEUSS: Conceptual analysis of resource meta-information; SHELDON, M.A. and R. WEISS: Discover: a resource discovery system based on content routing; WINOGRAD, T.: Beyond browsing: shared comments, SOAPs, Trails, and On-line communities