Search (8 results, page 1 of 1)

  • Filter: author_ss:"Zapilko, B."
  1. Wilde, A.; Wenninger, A.; Hopt, O.; Schaer, P.; Zapilko, B.: Aktivitäten von GESIS im Kontext von Open Data und Zugang zu sozialwissenschaftlichen Forschungsergebnissen (2010) 0.04
    0.03752516 = product of:
      0.0938129 = sum of:
        0.044986445 = weight(_text_:web in 4275) [ClassicSimilarity], result of:
          0.044986445 = score(doc=4275,freq=4.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.3059541 = fieldWeight in 4275, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4275)
        0.048826456 = product of:
          0.09765291 = sum of:
            0.09765291 = weight(_text_:server in 4275) [ClassicSimilarity], result of:
              0.09765291 = score(doc=4275,freq=2.0), product of:
                0.25762302 = queryWeight, product of:
                  5.7180014 = idf(docFreq=394, maxDocs=44218)
                  0.04505473 = queryNorm
                0.37905353 = fieldWeight in 4275, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7180014 = idf(docFreq=394, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4275)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    With the full-text server SSOAR and the registration agency for social science research data da|ra, GESIS - Leibniz Institute for the Social Sciences operates two platforms for documenting scientific results in the form of publications and primary data. Both systems rely on the consistent use of persistent identifiers (URN and DOI), which makes it possible to link the data registered via da|ra with the full-text documents from SSOAR as well as with other information from the GESIS holdings. In addition, the use of semantic technologies such as SKOS and RDF establishes a connection to the Semantic Web.
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
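    A minimal sketch of the linking described in the abstract above, in which persistent identifiers connect da|ra-registered primary data with SSOAR full texts and the result is exposed as RDF: the Python code below uses the rdflib library; the identifiers, the concept URI and the choice of Dublin Core properties are illustrative assumptions, not the actual GESIS data model.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS, RDF, SKOS

    g = Graph()
    g.bind("dcterms", DCTERMS)
    g.bind("skos", SKOS)

    # Hypothetical identifiers: a da|ra-registered dataset (DOI) and an SSOAR full text (URN).
    dataset = URIRef("https://doi.org/10.0000/example-study")        # illustrative DOI
    fulltext = URIRef("urn:nbn:de:0168-ssoar-000000")                # illustrative URN
    concept = URIRef("http://example.org/thesoz/datenzugang")        # illustrative concept URI

    # Link the registered primary data with the publication that analyses it
    # (the property choice is an assumption made for this sketch).
    g.add((dataset, DCTERMS.isReferencedBy, fulltext))
    g.add((fulltext, DCTERMS.references, dataset))

    # Shared subject indexing with a SKOS concept connects both records to the Semantic Web.
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal("Datenzugang", lang="de")))
    g.add((dataset, DCTERMS.subject, concept))
    g.add((fulltext, DCTERMS.subject, concept))

    print(g.serialize(format="turtle"))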
  2. Zapilko, B.; Sure, Y.: Neue Möglichkeiten für die Wissensorganisation durch die Kombination von Digital Library Verfahren mit Standards des Semantic Web (2013) 0.01
    0.014225964 = product of:
      0.07112982 = sum of:
        0.07112982 = weight(_text_:web in 936) [ClassicSimilarity], result of:
          0.07112982 = score(doc=936,freq=10.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.48375595 = fieldWeight in 936, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=936)
      0.2 = coord(1/5)
    
    Abstract
    For several years, developments and technologies of the Semantic Web have increasingly been encountering the library and documentation world, with the aim of making the knowledge collected and maintained there for decades accessible to the Web and its users and available for further processing. Both camps can profit from opening up to each other's methods and from the resulting possibilities, for example an integrated search across distributed, semantically enriched document collections, or the enrichment of one's own holdings with content from other, freely available collections. This paper presents the reformulation of an established information science method from the documentation and library world, the so-called shell model (Schalenmodell), and shows the new potential and new possibilities for knowledge organization that can arise from applying it in the Semantic Web. In addition, first practical results of the preparatory work for this reformulation are presented: the transformation of a thesaurus into the SKOS format.
    Theme
    Semantic Web
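    The transformation of a thesaurus into the SKOS format, mentioned at the end of the abstract above, can be sketched in a few lines with the Python rdflib library. The toy records, term values and base namespace below are invented for illustration and do not reproduce the actual thesaurus or workflow used in the paper.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    BASE = Namespace("http://example.org/thesaurus/")   # illustrative namespace

    # Toy thesaurus records: (id, preferred term, broader term id, synonyms).
    records = [
        ("c1", "Wissensorganisation", None, ["Wissensordnung"]),
        ("c2", "Thesaurus", "c1", []),
        ("c3", "Klassifikation", "c1", ["Systematik"]),
    ]

    g = Graph()
    g.bind("skos", SKOS)

    for cid, pref, broader, synonyms in records:
        concept = BASE[cid]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(pref, lang="de")))
        for syn in synonyms:                 # synonyms become alternative labels
            g.add((concept, SKOS.altLabel, Literal(syn, lang="de")))
        if broader:                          # hierarchy becomes skos:broader / skos:narrower
            g.add((concept, SKOS.broader, BASE[broader]))
            g.add((BASE[broader], SKOS.narrower, concept))

    print(g.serialize(format="turtle"))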
  3. Stempfhuber, M.; Zapilko, B.: Modelling text-fact-integration in digital libraries (2009) 0.01
    0.011726889 = product of:
      0.05863444 = sum of:
        0.05863444 = weight(_text_:wide in 3393) [ClassicSimilarity], result of:
          0.05863444 = score(doc=3393,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.29372054 = fieldWeight in 3393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
      0.2 = coord(1/5)
    
    Abstract
    Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information etc.) according to their scientific users' needs. To date no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes problems of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to address semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them into context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used ontology-based approaches, which seem better suited for representing complex information such as that contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced effort for domain modelling with ontologies.
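    One of the claimed benefits, the re-use of a single knowledge organisation system for both textual and factual data, can be sketched as follows: a publication and a survey variable are indexed with the same (hypothetical) thesaurus concept, so one concept-based query retrieves both. The data structures and identifiers are illustrative and are not the model proposed in the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        """A publication (textual) or a survey variable (factual), indexed with KOS concepts."""
        identifier: str
        kind: str                       # "publication" or "survey_variable"
        concepts: set = field(default_factory=set)

    # Both resource types are indexed against the same (hypothetical) thesaurus concept URI.
    CONCEPT_UNEMPLOYMENT = "http://example.org/thesoz/unemployment"

    resources = [
        Resource("ssoar-12345", "publication", {CONCEPT_UNEMPLOYMENT}),
        Resource("za-6789:v42", "survey_variable", {CONCEPT_UNEMPLOYMENT}),
    ]

    def search_by_concept(concept_uri):
        """Concept-based retrieval that spans textual and factual resources alike."""
        return [r for r in resources if concept_uri in r.concepts]

    for hit in search_by_concept(CONCEPT_UNEMPLOYMENT):
        print(hit.kind, hit.identifier)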
  4. Mayr, P.; Zapilko, B.; Sure, Y.: ¬Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010) 0.01
    0.011019385 = product of:
      0.055096924 = sum of:
        0.055096924 = weight(_text_:web in 3392) [ClassicSimilarity], result of:
          0.055096924 = score(doc=3392,freq=6.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.37471575 = fieldWeight in 3392, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3392)
      0.2 = coord(1/5)
    
    Abstract
    In August 2009, SKOS ("Simple Knowledge Organization System") was published by the W3C as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies on the Web and for making them technically and semantically interoperable. In the long term, the heterogeneous landscape of indexing vocabularies can be unified via SKOS and, above all, the contents of the classical databases (the domain of specialized information services) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and more strongly interlinked with one another. Vocabularies in SKOS format can play a significant role here by serving as a standardized bridge vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri that are converted into the SKOS format and connected at the content level via cross-concordances from the KoMoHe project. The SKOS mapping properties provide standardized relations for this purpose that correspond to those of the cross-concordances. The thesauri involved in the case study are (a) TheSoz (Thesaurus Sozialwissenschaften, GESIS), (b) STW (Standard-Thesaurus Wirtschaft, ZBW) and (c) the IBLK-Thesaurus (SWP).
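    A minimal sketch of how cross-concordance relations can be expressed with the SKOS mapping properties mentioned in the abstract, again using rdflib; the concept URIs and labels are invented stand-ins for TheSoz and STW entries, and the particular mapping relations chosen are assumptions.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    THESOZ = Namespace("http://example.org/thesoz/")    # stand-in for TheSoz concept URIs
    STW = Namespace("http://example.org/stw/")           # stand-in for STW concept URIs

    g = Graph()
    g.bind("skos", SKOS)

    # Two concepts from different thesauri that a cross-concordance declares equivalent.
    g.add((THESOZ.arbeitslosigkeit, RDF.type, SKOS.Concept))
    g.add((THESOZ.arbeitslosigkeit, SKOS.prefLabel, Literal("Arbeitslosigkeit", lang="de")))
    g.add((STW.unemployment, RDF.type, SKOS.Concept))
    g.add((STW.unemployment, SKOS.prefLabel, Literal("Unemployment", lang="en")))

    # SKOS mapping properties mirror the relation types of typical cross-concordances.
    g.add((THESOZ.arbeitslosigkeit, SKOS.exactMatch, STW.unemployment))         # equivalence
    g.add((THESOZ.jugendarbeitslosigkeit, SKOS.broadMatch, STW.unemployment))   # narrower to broader term
    g.add((THESOZ.beschaeftigung, SKOS.relatedMatch, STW.unemployment))         # associative relation

    print(g.serialize(format="turtle"))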
  5. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2013) 0.01
    0.0063620443 = product of:
      0.03181022 = sum of:
        0.03181022 = weight(_text_:web in 989) [ClassicSimilarity], result of:
          0.03181022 = score(doc=989,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.21634221 = fieldWeight in 989, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
      0.2 = coord(1/5)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated and high-quality search scenarios in distributed data environments. Offered through the web and linked with each other, they act as a central link so that users can move back and forth between different data sources available online. In the past, crosswalks between different thesauri have primarily been developed manually. In the long run, the intellectual updating of such crosswalks requires considerable personnel expense. An integration of automatic matching procedures, such as ontology matching tools, therefore seems an obvious need. On the basis of computer-generated correspondences between the Thesaurus for Economics (STW) and the Thesaurus for the Social Sciences (TheSoz), our contribution explores cross-border approaches between IT-assisted tools and procedures on the one hand and external quality assessment by domain experts on the other hand. The techniques that emerge enable the semi-automatic construction of vocabulary crosswalks.
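    In the simplest case, the combination of automatic matching output with expert quality control described above could look like the following sketch: tool-generated correspondences carry a confidence score, high-confidence candidates are accepted automatically and the remainder is queued for a domain expert. The threshold, field names and example pairs are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class Correspondence:
        """A candidate mapping produced by an ontology matching tool."""
        source: str        # e.g. an STW descriptor
        target: str        # e.g. a TheSoz descriptor
        confidence: float  # tool-reported similarity in [0, 1]

    # Illustrative tool output; real matchers emit far larger candidate sets.
    candidates = [
        Correspondence("stw:Unemployment", "thesoz:Arbeitslosigkeit", 0.97),
        Correspondence("stw:Labour market", "thesoz:Arbeitsmarkt", 0.88),
        Correspondence("stw:Labour market", "thesoz:Arbeitsmarktpolitik", 0.41),
    ]

    AUTO_ACCEPT = 0.95   # assumed threshold; in practice tuned against expert judgements

    accepted = [c for c in candidates if c.confidence >= AUTO_ACCEPT]
    for_review = [c for c in candidates if c.confidence < AUTO_ACCEPT]

    print("accepted automatically:", [(c.source, c.target) for c in accepted])
    print("queued for expert review:", [(c.source, c.target) for c in for_review])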
  6. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2014) 0.01
    0.0053017037 = product of:
      0.026508518 = sum of:
        0.026508518 = weight(_text_:web in 1371) [ClassicSimilarity], result of:
          0.026508518 = score(doc=1371,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.18028519 = fieldWeight in 1371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1371)
      0.2 = coord(1/5)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments where more than one controlled vocabulary is in use. Offered through the web and linked with each other, they act as a central link so that users can move back and forth between different online data sources. In the past, crosswalks between different thesauri have usually been developed manually. In the long run, the intellectual updating of such crosswalks is expensive. An obvious solution would be to apply automatic matching procedures, such as the so-called ontology matching tools. On the basis of computer-generated correspondences between the Thesaurus for the Social Sciences (TSS) and the Thesaurus for Economics (STW), our contribution explores the trade-off between IT-assisted tools and procedures on the one hand and external quality evaluation by domain experts on the other hand. This paper presents techniques for the semi-automatic development and maintenance of vocabulary crosswalks. The performance of multiple matching tools was first evaluated against a reference set of correct mappings; the tools were then used to generate new mappings. It was concluded that ontology matching tools can be used effectively to speed up the work of domain experts. By optimizing the workflow, the method promises to facilitate sustained updating of high-quality vocabulary crosswalks.
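    The evaluation step described in the abstract, comparing tool output against a reference set of correct mappings, essentially amounts to computing precision and recall over concept pairs. A minimal sketch with invented pairs standing in for TSS/STW mappings:

    # Reference mappings judged correct by domain experts (invented placeholder pairs).
    reference = {
        ("stw:Unemployment", "tss:Arbeitslosigkeit"),
        ("stw:Inflation", "tss:Inflation"),
        ("stw:Household", "tss:Privathaushalt"),
    }

    # Mappings proposed by one ontology matching tool (also invented).
    tool_output = {
        ("stw:Unemployment", "tss:Arbeitslosigkeit"),
        ("stw:Inflation", "tss:Inflation"),
        ("stw:Household", "tss:Haushaltsgeraet"),   # a false positive
    }

    true_positives = tool_output & reference
    precision = len(true_positives) / len(tool_output)
    recall = len(true_positives) / len(reference)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")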
  7. Zapilko, B.: Dynamisches Browsing im Kontext von Informationsarchitekturen (2010) 0.00
    0.0036625762 = product of:
      0.01831288 = sum of:
        0.01831288 = product of:
          0.03662576 = sum of:
            0.03662576 = weight(_text_:22 in 3744) [ClassicSimilarity], result of:
              0.03662576 = score(doc=3744,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.23214069 = fieldWeight in 3744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3744)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
  8. Kempf, A.O.; Zapilko, B.: Normdatenpflege in Zeiten der Automatisierung : Erstellung und Evaluation automatisch aufgebauter Thesaurus-Crosskonkordanzen (2013) 0.00
    0.0036625762 = product of:
      0.01831288 = sum of:
        0.01831288 = product of:
          0.03662576 = sum of:
            0.03662576 = weight(_text_:22 in 1021) [ClassicSimilarity], result of:
              0.03662576 = score(doc=1021,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.23214069 = fieldWeight in 1021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1021)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    18. 8.2013 12:53:22