Search (8 results, page 1 of 1)

  • author_ss:"Sure, Y."
  1. Sure, Y.; Tempich, C.: Wissensvernetzung in Organisationen (2006) 0.03
    0.028273331 = product of:
      0.10602499 = sum of:
        0.026077118 = weight(_text_:und in 5800) [ClassicSimilarity], result of:
          0.026077118 = score(doc=5800,freq=12.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.35989314 = fieldWeight in 5800, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5800)
        0.045945212 = weight(_text_:zur in 5800) [ClassicSimilarity], result of:
          0.045945212 = score(doc=5800,freq=10.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.45642412 = fieldWeight in 5800, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.046875 = fieldNorm(doc=5800)
        0.009822378 = weight(_text_:in in 5800) [ClassicSimilarity], result of:
          0.009822378 = score(doc=5800,freq=12.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.22087781 = fieldWeight in 5800, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5800)
        0.024180276 = weight(_text_:der in 5800) [ClassicSimilarity], result of:
          0.024180276 = score(doc=5800,freq=10.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.3311152 = fieldWeight in 5800, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=5800)
      0.26666668 = coord(4/15)
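
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a cross-check of the arithmetic only, here is a minimal Python sketch that recomputes the score of this first hit from the figures shown, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); it is an illustration, not the search engine's actual code.

      import math

      def classic_idf(doc_freq, max_docs):
          # ClassicSimilarity idf, matching "idf(docFreq=..., maxDocs=...)" above
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                      # tf(freq) = sqrt(freq)
          idf = classic_idf(doc_freq, max_docs)
          query_weight = idf * query_norm           # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm      # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight        # weight = queryWeight * fieldWeight

      QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.032692216, 0.046875, 44218

      # The four matching terms of doc 5800 as (freq, docFreq), taken from the breakdown.
      terms = {"und": (12, 13101), "zur": (10, 5528), "in": (12, 30841), "der": (10, 12875)}
      partial = sum(term_score(f, df, MAX_DOCS, QUERY_NORM, FIELD_NORM) for f, df in terms.values())

      coord = 4 / 15                                # coord(4/15): 4 of 15 query clauses matched
      print(partial * coord)                        # ~0.0282733, the document score shown above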
    
    Abstract
    Making the right knowledge available at the right time is one of the main goals of knowledge management. Knowledge modelling with ontologies offers solutions for many of the tasks this involves, e.g. linking different knowledge carriers and knowledge sources, and has established itself as an integral part of numerous knowledge management applications. Driven by new organizational paradigms such as virtual organizations and new communication paradigms such as peer-to-peer, decentralized knowledge management is becoming increasingly important. In particular, such environments also pose new challenges for knowledge modelling, e.g. reaching agreement when modelling in distributed settings. This chapter presents the DILIGENT methodology for creating, maintaining and curating ontologies in distributed and dynamic environments and illustrates it with practical examples. Besides the underlying process model for knowledge modelling, the focus is on the support for argumentation during knowledge modelling in distributed environments, which underpins the decentralized consensus process.
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini u. A. Blumauer
  2. Krötzsch, M.; Hitzler, P.; Ehrig, M.; Sure, Y.: Category theory in ontology research : concrete gain from an abstract approach (2004 (?)) 0.02
    0.023783326 = product of:
      0.089187466 = sum of:
        0.02783884 = weight(_text_:23 in 4538) [ClassicSimilarity], result of:
          0.02783884 = score(doc=4538,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 4538, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=4538)
        0.02783884 = weight(_text_:23 in 4538) [ClassicSimilarity], result of:
          0.02783884 = score(doc=4538,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 4538, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=4538)
        0.02783884 = weight(_text_:23 in 4538) [ClassicSimilarity], result of:
          0.02783884 = score(doc=4538,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 4538, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=4538)
        0.005670953 = weight(_text_:in in 4538) [ClassicSimilarity], result of:
          0.005670953 = score(doc=4538,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.12752387 = fieldWeight in 4538, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4538)
      0.26666668 = coord(4/15)
    
    Abstract
    The focus of research on representing and reasoning with knowledge has traditionally been on single specifications and appropriate inference paradigms to draw conclusions from such data. Accordingly, this is also an essential aspect of ontology research which has received much attention in recent years. But ontologies introduce another new challenge based on the distributed nature of most of their applications, which requires relating heterogeneous ontological specifications and integrating information from multiple sources. These problems have of course been recognized, but many current approaches still lack the deep formal backgrounds on which today's reasoning paradigms are already founded. Here we propose category theory as a well-explored and very extensive mathematical foundation for modelling distributed knowledge. A particular prospect is to derive conclusions from the structure of those distributed knowledge bases, as is needed, for example, when merging ontologies.
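    As a loose, purely illustrative reading of the categorical viewpoint: ontology alignments can be treated as morphisms that compose, which is one way structural information supports conclusions about distributed knowledge bases. The Python toy below uses invented concept names and is not the paper's formalization.

      # Toy illustration with invented concept names: ontology alignments treated as
      # composable mappings, the morphism-like structure the abstract alludes to.
      align_a_to_b = {"Person": "Agent", "Article": "Document"}
      align_b_to_c = {"Agent": "Actor", "Document": "Publication"}

      def compose(f, g):
          # Compose two alignments: follow f, then g (defined only where both apply).
          return {src: g[mid] for src, mid in f.items() if mid in g}

      align_a_to_c = compose(align_a_to_b, align_b_to_c)
      print(align_a_to_c)  # {'Person': 'Actor', 'Article': 'Publication'}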
    Date
    21.10.2018 15:23:19
  3. Zapilko, B.; Sure, Y.: Neue Möglichkeiten für die Wissensorganisation durch die Kombination von Digital Library Verfahren mit Standards des Semantic Web (2013) 0.01
    0.013589372 = product of:
      0.06794686 = sum of:
        0.033665415 = weight(_text_:und in 936) [ClassicSimilarity], result of:
          0.033665415 = score(doc=936,freq=20.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.46462005 = fieldWeight in 936, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=936)
        0.005670953 = weight(_text_:in in 936) [ClassicSimilarity], result of:
          0.005670953 = score(doc=936,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.12752387 = fieldWeight in 936, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=936)
        0.028610492 = weight(_text_:der in 936) [ClassicSimilarity], result of:
          0.028610492 = score(doc=936,freq=14.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.3917808 = fieldWeight in 936, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=936)
      0.2 = coord(3/15)
    
    Abstract
    For some years now, developments and technologies of the Semantic Web have increasingly been converging with the library and documentation world, with the aim of making the knowledge collected and curated there over decades accessible to the Web and its users and available for further processing. Both camps can profit from opening up to each other's methods and from the resulting possibilities, for example integrated retrieval across distributed and semantically enriched document collections, or the enrichment of one's own holdings with content from other, freely available collections. This paper presents the reformulation of an established information-science method from the documentation and library world, the so-called Schalenmodell (shell model), and points out the new potential and new possibilities for knowledge organization that its application in the Semantic Web can open up. In addition, first practical results of the preparatory work for this reformulation are presented: the transformation of a thesaurus into the SKOS format.
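    As a hedged illustration of the kind of transformation mentioned at the end of the abstract (thesaurus entries becoming SKOS concepts), the following Python/rdflib sketch uses an invented namespace and invented terms; the conversion described in the paper is considerably more involved.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # invented namespace, for illustration only

      g = Graph()
      g.bind("skos", SKOS)

      def add_term(term_id, pref_label, broader_id=None, lang="de"):
          # Turn one thesaurus entry into a skos:Concept with prefLabel and broader.
          concept = EX[term_id]
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.prefLabel, Literal(pref_label, lang=lang)))
          if broader_id is not None:
              g.add((concept, SKOS.broader, EX[broader_id]))

      add_term("wissensorganisation", "Wissensorganisation")
      add_term("thesaurus", "Thesaurus", broader_id="wissensorganisation")

      print(g.serialize(format="turtle"))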
    Series
    Fortschritte in der Wissensorganisation; Bd.12
    Source
    Wissen - Wissenschaft - Organisation: Proceedings der 12. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Bonn, 19. bis 21. Oktober 2009. Hrsg.: H.P. Ohly
  4. Mayr, P.; Mutschke, P.; Petras, V.; Schaer, P.; Sure, Y.: Applying science models for search (2010) 0.01
    0.010563447 = product of:
      0.052817233 = sum of:
        0.020074176 = weight(_text_:und in 4663) [ClassicSimilarity], result of:
          0.020074176 = score(doc=4663,freq=4.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.27704588 = fieldWeight in 4663, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4663)
        0.027396431 = weight(_text_:zur in 4663) [ClassicSimilarity], result of:
          0.027396431 = score(doc=4663,freq=2.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.27215877 = fieldWeight in 4663, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0625 = fieldNorm(doc=4663)
        0.005346625 = weight(_text_:in in 4663) [ClassicSimilarity], result of:
          0.005346625 = score(doc=4663,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.120230645 = fieldWeight in 4663, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4663)
      0.2 = coord(3/15)
    
    Abstract
    The paper proposes three different kinds of science models as value-added services that are integrated into the retrieval process to enhance retrieval quality. The paper discusses the approaches Search Term Recommendation, Bradfordizing and Author Centrality on a general level and addresses implementation issues of the models within a real-life retrieval environment.
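    Of the three models, Bradfordizing lends itself to a compact sketch: re-rank a result list so that documents from the most productive ("core") journals come first. The Python snippet below is a simplified reading of that idea with invented sample data, not the authors' implementation.

      from collections import Counter

      def bradfordize(results):
          # Re-rank so documents from the most productive journals come first; ties keep
          # the original (e.g. relevance-based) order because sorted() is stable.
          productivity = Counter(doc["journal"] for doc in results)
          return sorted(results, key=lambda doc: -productivity[doc["journal"]])

      hits = [  # invented sample result list
          {"title": "A", "journal": "J. Rare"},
          {"title": "B", "journal": "Core J."},
          {"title": "C", "journal": "Core J."},
          {"title": "D", "journal": "Core J."},
          {"title": "E", "journal": "J. Niche"},
      ]
      print([doc["title"] for doc in bradfordize(hits)])  # ['B', 'C', 'D', 'A', 'E']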
    Series
    Schriften zur Informationswissenschaft; Bd.58
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  5. Mayr, P.; Zapilko, B.; Sure, Y.: ¬Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010) 0.01
    0.006898502 = product of:
      0.05173876 = sum of:
        0.030111264 = weight(_text_:und in 3392) [ClassicSimilarity], result of:
          0.030111264 = score(doc=3392,freq=16.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.41556883 = fieldWeight in 3392, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3392)
        0.021627497 = weight(_text_:der in 3392) [ClassicSimilarity], result of:
          0.021627497 = score(doc=3392,freq=8.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.29615843 = fieldWeight in 3392, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=3392)
      0.13333334 = coord(2/15)
    
    Abstract
    In August 2009, SKOS ("Simple Knowledge Organization System") was published by the W3C as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies on the Web and for making them technically and semantically interoperable. In the long term, the heterogeneous landscape of indexing vocabularies can be unified via SKOS and, above all, the contents of the classical databases (the specialist-information sector) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and linked more closely with one another. Vocabularies in SKOS format can take on a relevant function here by serving as a standardized bridge vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri that are converted into the SKOS format and linked at the content level via cross-concordances from the KoMoHe project. The SKOS mapping properties provide standardized relations for this purpose that correspond to those of the cross-concordances. The thesauri involved in the case study are a) TheSoz (Thesaurus Sozialwissenschaften, GESIS), b) STW (Standard-Thesaurus Wirtschaft, ZBW) and c) the IBLK-Thesaurus (SWP).
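    A minimal sketch of what such a SKOS-encoded cross-concordance could look like, using Python/rdflib and invented concept URIs that merely stand in for TheSoz and STW entries (the real mappings differ):

      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS

      # Invented concept URIs standing in for entries of two of the thesauri named above.
      THESOZ = Namespace("http://example.org/thesoz/")
      STW = Namespace("http://example.org/stw/")

      g = Graph()
      g.bind("skos", SKOS)

      # Cross-concordance relations expressed with SKOS mapping properties.
      g.add((THESOZ["arbeitsmarkt"], SKOS.exactMatch, STW["labour-market"]))
      g.add((THESOZ["arbeitslosigkeit"], SKOS.relatedMatch, STW["unemployment"]))

      print(g.serialize(format="turtle"))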
    Footnote
    Contribution to: 25. Oberhofer Kolloquium 2010: Recherche im Google-Zeitalter - Vollständig und präzise!?
  6. Sure, Y.; Studer, R.: ¬A methodology for ontology-based knowledge management (2004) 0.00
    6.5482524E-4 = product of:
      0.009822378 = sum of:
        0.009822378 = weight(_text_:in in 4400) [ClassicSimilarity], result of:
          0.009822378 = score(doc=4400,freq=12.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.22087781 = fieldWeight in 4400, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4400)
      0.06666667 = coord(1/15)
    
    Abstract
    Ontologies are a core element of the knowledge management architecture described in Chapter 1. In this chapter we describe a methodology for application-driven ontology development, covering the whole project lifecycle from the kick-off phase to the maintenance phase. Existing methodologies and practical ontology development experiences have in common that they start from identifying the purpose of the ontology and the need for domain knowledge acquisition; they differ in their foci and in the subsequent steps to be taken. In our approach to the ontology development process, we integrate aspects from existing methodologies and lessons learned from practical experience (as described in Section 3.7). We put ontology development into a wider organizational context by performing an a priori feasibility study. The feasibility study is based on CommonKADS; we modified certain aspects of CommonKADS for a tight integration of the feasibility study into our methodology.
  7. Staab, S.; Studer, R.; Sure, Y.; Volz, R.: SEAL - a SEmantic portAL with content management functionality (2002) 0.00
    4.6303135E-4 = product of:
      0.00694547 = sum of:
        0.00694547 = weight(_text_:in in 3601) [ClassicSimilarity], result of:
          0.00694547 = score(doc=3601,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.1561842 = fieldWeight in 3601, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3601)
      0.06666667 = coord(1/15)
    
    Abstract
    "OntoWeb" is an European Union IST-funded thematic network for "Ontology-based information exchange for knowledge management and electronic commerce". The corresponding OntoWeb portal constitutes a Web-based research information system that is driven by some of the technologies which it reports about. In this paper, we present the core methodology underlying the OntoWeb portal, viz. SEAL (SEmantic portAL). In particular, we describe some of the core challenges that SEAL must meet. Because of the distributed nature of research information, SEAL has been developed as a methodology that integrates heterogeneous information from distributed resources. Because of the complexity of the application domain, SEAL is based an ontologies about research information that greatly contribute to the combined goals of low-effort information integration and user-friendly information presentation. Because of the high quality requirements obliged onto the OntoWeb portal, SEAL has been extended with content management functionality supporting portal editors in their process to rule out undesirable content.
  8. Sure, Y.; Erdmann, M.; Studer, R.: OntoEdit: collaborative engineering of ontologies (2004) 0.00
    4.3655015E-4 = product of:
      0.006548252 = sum of:
        0.006548252 = weight(_text_:in in 4405) [ClassicSimilarity], result of:
          0.006548252 = score(doc=4405,freq=12.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.14725187 = fieldWeight in 4405, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4405)
      0.06666667 = coord(1/15)
    
    Abstract
    Developing ontologies is central to our vision of Semantic Web-based knowledge management. The methodology described in Chapter 3 guides the development of ontologies for different applications. However, because of the size of ontologies, their complexity, their formal underpinnings and the necessity to reach a shared understanding within a group of people when defining an ontology, ontology construction is still far from being a well-understood process. Concerning the methodology, OntoEdit focuses on three of the main steps for ontology development (the methodology is described in Chapter 3), viz. the kick-off, refinement, and evaluation phases. We describe the steps supported by OntoEdit and focus on the collaborative aspects that occur during each of these steps. First, all requirements of the envisaged ontology are collected during the kick-off phase. As is typical for ontology engineering, ontology engineers and domain experts are brought together in a team that works on a description of the domain and the goal of the ontology, design guidelines, available knowledge sources (e.g. re-usable ontologies, thesauri, etc.), potential users, and the use cases and applications supported by the ontology. The output of this phase is a semi-formal description of the ontology. Second, during the refinement phase, the team extends the semi-formal description in several iterations and formalizes it in an appropriate representation language such as RDF(S) or the more expressive DAML+OIL. The output of this phase is a mature ontology (the 'target ontology'). Third, the target ontology needs to be evaluated against the requirement specifications. Typically this phase serves as a proof of the usefulness of ontologies (and ontology-based applications) and may involve the engineering team as well as end users of the targeted application. The output of this phase is an evaluated ontology, ready for roll-out into a productive environment. Support for these collaborative development steps within the ontology development methodology is crucial in order to meet the conflicting needs for ease of use and construction of complex ontology structures. We now illustrate OntoEdit's support for each of the supported steps. The examples shown are taken from the Swiss Life case study on skills management (cf. Chapter 12).
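    As a hedged illustration of what the refinement phase's "target ontology" might look like in RDF(S), here is a small Python/rdflib sketch with invented classes loosely inspired by the skills-management case study mentioned above; OntoEdit itself is a graphical tool and is not reproduced here.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/skills#")  # invented mini skills ontology

      g = Graph()
      g.bind("ex", EX)

      # Class hierarchy: ProgrammingSkill is a kind of Skill.
      g.add((EX.Skill, RDF.type, RDFS.Class))
      g.add((EX.ProgrammingSkill, RDF.type, RDFS.Class))
      g.add((EX.ProgrammingSkill, RDFS.subClassOf, EX.Skill))
      g.add((EX.Employee, RDF.type, RDFS.Class))

      # Property linking employees to their skills.
      g.add((EX.hasSkill, RDF.type, RDF.Property))
      g.add((EX.hasSkill, RDFS.domain, EX.Employee))
      g.add((EX.hasSkill, RDFS.range, EX.Skill))
      g.add((EX.hasSkill, RDFS.label, Literal("has skill", lang="en")))

      print(g.serialize(format="turtle"))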