Search (79 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. Haas, M.: Methoden der künstlichen Intelligenz in betriebswirtschaftlichen Anwendungen (2006) 0.03
    0.030713584 = product of:
      0.12285434 = sum of:
        0.12285434 = weight(_text_:da in 4499) [ClassicSimilarity], result of:
          0.12285434 = score(doc=4499,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.59977156 = fieldWeight in 4499, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0625 = fieldNorm(doc=4499)
      0.25 = coord(1/4)
    
    Content
    Diploma thesis for the degree of Diplom-Wirtschaftsinformatiker (FH) at Hochschule Wismar. Cf.: http://www.wi.hs-wismar.de/~cleve/vorl/projects/da/DA-FS-Haas.pdf.
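    Note: the explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown for result 1. A minimal sketch in Python, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) (an assumption about the scoring model, not something stated in the output itself), reproduces the 0.03 score from the figures shown:

      import math

      # Figures copied from the explain tree for doc 4499 and the query term "da".
      freq, doc_freq, max_docs = 4.0, 990, 44218
      query_norm, field_norm = 0.04269026, 0.0625
      coord = 0.25  # coord(1/4): one of four query clauses matched

      tf = math.sqrt(freq)                             # 2.0
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~4.7981725
      query_weight = idf * query_norm                  # ~0.20483522
      field_weight = tf * idf * field_norm             # ~0.59977156
      print(coord * query_weight * field_weight)       # ~0.0307136, displayed as 0.03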
  2. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.016027568 = product of:
      0.06411027 = sum of:
        0.06411027 = sum of:
          0.0062708696 = weight(_text_:a in 539) [ClassicSimilarity], result of:
            0.0062708696 = score(doc=539,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.12739488 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
          0.057839405 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
            0.057839405 = score(doc=539,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.38690117 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
      0.25 = coord(1/4)
    
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  3. Teutsch, K.: ¬Die Welt ist doch eine Scheibe : Google-Herausforderer eyePlorer (2009) 0.01
    0.011755096 = product of:
      0.047020383 = sum of:
        0.047020383 = weight(_text_:da in 2678) [ClassicSimilarity], result of:
          0.047020383 = score(doc=2678,freq=6.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.22955224 = fieldWeight in 2678, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2678)
      0.25 = coord(1/4)
    
    Content
    A new visual order: Martin Hirsch is the grandson of Nobel laureate Werner Heisenberg. He is also a brain researcher and has for years been occupied with the question: what is my head actually doing while I do brain research? Ralf von Grafenstein is a marketing expert specializing in services on the Internet. Together, on 1 December 2008, they founded a company in Berlin whose holy grail is the aforementioned disc, on which - this is the idea - the whole world, or at least the world of the Internet, is soon to find a place. The disc is called eyePlorer, a name meant as an invitation to its users, who are to bring the immeasurable data sets of the Internet into a new visual order on a novel, disc-shaped platform. The key to this, Hirsch and von Grafenstein were certain, lies in brain research: why not transfer the associative abilities of the human mind to search engines? Providers such as Google have so far kept their hands off such approaches and rely instead on full-text programs, that is, language-capable systems which ultimately, just like keyword search, lead only to opaquely ranked collections of links. The sluggish user rarely ventures beyond page two of the results, and because it is never seen, a great deal of potentially valuable information falls by the wayside.
    Einstein, Weizsäcker and Hitler: For demonstration purposes the eyePlorer disc is projected onto a wall. If you enter the name Werner Heisenberg in the small search field in the middle, the disc turns into a sliced pie. The individual slices correspond to categories such as "person", "technology" or "organization", and they are covered with colourful buttons under which the information is hidden. Thus, on the topic of Heisenberg, one encounters not only his colleagues Einstein, Weizsäcker and Schrödinger but also Adolf Hitler. A click on the corresponding button reveals, among other things, that Heisenberg came under fire from the SS in 1933 because he refused to let himself be harnessed to an antisemitic physics movement. Working on this principle, the freely associating machine automatically washes up new facts again and again, facts the user never asked for but which may support his research and which he may later also model himself (the machine still needs further development). But is that what we want, to be advised by a machine? "Google is like a zoo," Ralf von Grafenstein adds. "A giraffe stands in one enclosure, a predator in another, but they are clearly separated from each other by fences and paths. There is no way to look at them together. That is where we come in. We can compare apples with oranges!" The world is a disc, or the disc is a world, on which many things are connected with many others, and some with nothing at all. The advantage of this machine is that in the future it could create meaning where others merely point drily to sources. "Google is an incredibly heterogeneous experience, with constant waiting times and mouse clicks in between. That costs me far too much meta-thinking power," says Hirsch. "We wanted to build a machine with an aesthetically pleasing environment that I hardly have to leave, because it delivers information right into my thoughts."
    When the machine thinks: It fits the hubris of the project that the eyePlorer was originally to be called HAL, like the onboard computer that runs out of control in Kubrick's "2001: A Space Odyssey". Shift each letter one position to the right in the alphabet, however, and you get IBM. What happens to our knowledge when the machine itself begins to think? Ralf von Grafenstein puts on a serious face. "It is not our intention to leave it on its own. For us it is not only about finding, but also about taking part. The community is important. The dialogue goes both ways." The pilot, in the form of a watchful community, is thus to stay on board. This assumption is also favoured by the emerging touch technologies with which the iPhone is currently so successful: "Ten percent of human brain capacity alone is devoted to the pincer grip." Martin Hirsch is surprised that the IT industry is only now taking this insight into account. On touch-sensitive screens, users will soon playfully create content with a few gestures and make it available to the system. The search engine thus becomes a "sparring partner" and an information button a "knowledge nugget". However one serves up the knowledge ingredients of the Internet hypermarket: knowing, as a verb, is a lengthy process. At the moment, its creators say, the machine is still at the level of a two-year-old; it is soon to be socialized on the Internet, and its upbringing will then be in the hands of the users. When he saw Martin Hirsch with his disc for the first time, Ralf von Grafenstein thought: "This is overdue! This is coming! This has to get out!" Now it is here, small, innocent and inconspicuous. You can find it on Google.
  4. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.01
    0.010557172 = product of:
      0.042228688 = sum of:
        0.042228688 = sum of:
          0.0075250445 = weight(_text_:a in 3261) [ClassicSimilarity], result of:
            0.0075250445 = score(doc=3261,freq=8.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.15287387 = fieldWeight in 3261, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3261)
          0.034703642 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
            0.034703642 = score(doc=3261,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.23214069 = fieldWeight in 3261, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3261)
      0.25 = coord(1/4)
    
    Abstract
    Ontologies improve IR systems' retrieval and presentation of information, which makes the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system supports users not only in finding the information they need but also in extending their state of knowledge. In this way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
    Type
    a
  5. Definition of the CIDOC Conceptual Reference Model (2003) 0.01
    0.010006163 = product of:
      0.040024653 = sum of:
        0.040024653 = sum of:
          0.0053210096 = weight(_text_:a in 1652) [ClassicSimilarity], result of:
            0.0053210096 = score(doc=1652,freq=4.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.10809815 = fieldWeight in 1652, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
          0.034703642 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
            0.034703642 = score(doc=1652,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.23214069 = fieldWeight in 1652, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
      0.25 = coord(1/4)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  6. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    0.009616541 = product of:
      0.038466163 = sum of:
        0.038466163 = sum of:
          0.0037625222 = weight(_text_:a in 4820) [ClassicSimilarity], result of:
            0.0037625222 = score(doc=4820,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.07643694 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
          0.034703642 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
            0.034703642 = score(doc=4820,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.23214069 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
      0.25 = coord(1/4)
    
    Date
    3.12.2016 18:39:22
    Type
    a
  7. OWL Web Ontology Language Test Cases (2004) 0.01
    0.0057839407 = product of:
      0.023135763 = sum of:
        0.023135763 = product of:
          0.046271525 = sum of:
            0.046271525 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.046271525 = score(doc=4685,freq=2.0), product of:
                0.149494 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04269026 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 8.2011 13:33:22
  8. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.01
    0.005060948 = product of:
      0.020243792 = sum of:
        0.020243792 = product of:
          0.040487584 = sum of:
            0.040487584 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
              0.040487584 = score(doc=4324,freq=2.0), product of:
                0.149494 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04269026 = queryNorm
                0.2708308 = fieldWeight in 4324, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4324)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    11. 2.2011 18:22:25
  9. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.00
    0.00177367 = product of:
      0.00709468 = sum of:
        0.00709468 = product of:
          0.01418936 = sum of:
            0.01418936 = weight(_text_:a in 3979) [ClassicSimilarity], result of:
              0.01418936 = score(doc=3979,freq=16.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.28826174 = fieldWeight in 3979, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3979)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answering retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to the problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems. Still, our system performs very well, as is shown by a standard benchmark.
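    The abstract of result 9 describes reducing retrieval over a knowledge base to evaluating an SQL query over a database built from that knowledge base. As a toy sketch of that general idea only (not the construction from the paper; the table and function names below are invented for illustration), instance assertions plus a transitively closed subclass table turn "all individuals a with K |= C(a)" into a single join:

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE assertion (individual TEXT, concept TEXT);
      CREATE TABLE subclass  (sub TEXT, super TEXT);  -- reflexive-transitive closure
      INSERT INTO assertion VALUES ('rex', 'Dog'), ('tom', 'Cat');
      INSERT INTO subclass  VALUES ('Dog', 'Dog'), ('Cat', 'Cat'), ('Animal', 'Animal'),
                                   ('Dog', 'Animal'), ('Cat', 'Animal');
      """)

      def retrieve(concept):
          """All individuals asserted to belong to `concept` or one of its subclasses."""
          rows = con.execute(
              "SELECT DISTINCT a.individual FROM assertion a "
              "JOIN subclass s ON a.concept = s.sub "
              "WHERE s.super = ? ORDER BY a.individual", (concept,))
          return [r[0] for r in rows]

      print(retrieve("Animal"))  # ['rex', 'tom']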
  10. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.00
    0.0014022093 = product of:
      0.005608837 = sum of:
        0.005608837 = product of:
          0.011217674 = sum of:
            0.011217674 = weight(_text_:a in 537) [ClassicSimilarity], result of:
              0.011217674 = score(doc=537,freq=10.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.22789092 = fieldWeight in 537, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=537)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining the status and relationships of concepts in subject-based tools using a controlled vocabulary, and the extent to which it can be used as a general theory of knowledge organization rather than merely a methodology for structuring classifications.
  11. Prieto-Díaz, R.: ¬A faceted approach to building ontologies (2002) 0.00
    0.0013302524 = product of:
      0.0053210096 = sum of:
        0.0053210096 = product of:
          0.010642019 = sum of:
            0.010642019 = weight(_text_:a in 2259) [ClassicSimilarity], result of:
              0.010642019 = score(doc=2259,freq=16.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.2161963 = fieldWeight in 2259, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2259)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    An ontology is "an explicit conceptualization of a domain of discourse, and thus provides a shared and common understanding of the domain." We have been producing ontologies for millennia to understand and explain our rationale and environment. From Plato's philosophical framework to modern day classification systems, ontologies are, in most cases, the product of extensive analysis and categorization. Only recently has the process of building ontologies become a research topic of interest. Today, ontologies are built very much ad-hoc. A terminology is first developed providing a controlled vocabulary for the subject area or domain of interest, then it is organized into a taxonomy where key concepts are identified, and finally these concepts are defined and related to create an ontology. The intent of this paper is to show that domain analysis methods can be used for building ontologies. Domain analysis aims at generic models that represent groups of similar systems within an application domain. In this sense, it deals with categorization of common objects and operations, with clear, unambiguous definitions of them and with defining their relationships.
    Type
    a
  12. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.00
    0.0012443373 = product of:
      0.004977349 = sum of:
        0.004977349 = product of:
          0.009954698 = sum of:
            0.009954698 = weight(_text_:a in 4642) [ClassicSimilarity], result of:
              0.009954698 = score(doc=4642,freq=14.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.20223314 = fieldWeight in 4642, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4642)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
  13. Tzitzikas, Y.; Spyratos, N.; Constantopoulos, P.; Analyti, A.: Extended faceted ontologies (2002) 0.00
    0.0012443373 = product of:
      0.004977349 = sum of:
        0.004977349 = product of:
          0.009954698 = sum of:
            0.009954698 = weight(_text_:a in 2280) [ClassicSimilarity], result of:
              0.009954698 = score(doc=2280,freq=14.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.20223314 = fieldWeight in 2280, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2280)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A faceted ontology consists of a set of facets, where each facet consists of a predefined set of terms structured by a subsumption relation. We propose two extensions of faceted ontologies which allow inferring the conjunctions of terms that are valid in the underlying domain. We give a model-theoretic interpretation to these extended faceted ontologies and provide mechanisms for inferring the valid conjunctions of terms. This inference service can be exploited to prevent errors during the indexing process and to derive navigation trees that are suitable for browsing. The proposed scheme has several advantages over the hierarchical classification schemes currently in use, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and performed more efficiently).
    Type
    a
  14. Riva, P.; Doerr, M.; Zumer, M.: FRBRoo: enabling a common view of information from memory institutions (2008) 0.00
    0.0012393895 = product of:
      0.004957558 = sum of:
        0.004957558 = product of:
          0.009915116 = sum of:
            0.009915116 = weight(_text_:a in 3743) [ClassicSimilarity], result of:
              0.009915116 = score(doc=3743,freq=20.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.20142901 = fieldWeight in 3743, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3743)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In 2008 the FRBR/CRM Harmonisation Working Group achieved a major milestone: a complete version of the object-oriented definition of FRBR (FRBRoo) was released for comment. After a brief overview of the history and context of the Working Group, this paper focuses on the primary contributions resulting from this work.
    - FRBRoo is a self-contained document which expresses the concepts of FRBR using the object-oriented methodology and framework of CIDOC CRM. It is an alternative view on library conceptualisation for a different purpose, not a replacement for FRBR.
    - This 'translation' process presented an opportunity to verify and confirm FRBR's internal consistency.
    - FRBRoo offers a common view of library and museum documentation as two kinds of information from memory institutions. Such a common view is necessary to provide interoperable information systems for all users interested in accessing common or related content.
    - The analysis provided an opportunity for mutual enrichment of FRBR and CIDOC CRM. Examples include: the addition of the modelling of time and events to FRBR, which can be seen in its application to the publishing process; clarification of the manifestation entity; explicit modelling of performances and recordings in FRBR; adding the work entity to CRM; and adding the identifier assignment process to CRM.
    - Producing a formalisation which is more suited for implementation with object-oriented tools, and which facilitates the testing and adoption of FRBR concepts in implementations with different functional specifications and in different environments.
  15. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.00
    0.0012269331 = product of:
      0.0049077324 = sum of:
        0.0049077324 = product of:
          0.009815465 = sum of:
            0.009815465 = weight(_text_:a in 2362) [ClassicSimilarity], result of:
              0.009815465 = score(doc=2362,freq=40.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.19940455 = fieldWeight in 2362, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2362)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links but a very poor description logic structure enabling inferences. And what does the following, said a few lines previously, really mean: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of the concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, and are they exemplary, can they be recommended as "the best way"? I raise strong doubts in that respect, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but being myself a learner, especially in the field of medical knowledge representation, I do not speak "ex cathedra".
  16. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.00
    0.0012269331 = product of:
      0.0049077324 = sum of:
        0.0049077324 = product of:
          0.009815465 = sum of:
            0.009815465 = weight(_text_:a in 517) [ClassicSimilarity], result of:
              0.009815465 = score(doc=517,freq=40.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.19940455 = fieldWeight in 517, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=517)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
    This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus (AAT)) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low-level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata, that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it."
  17. Quick Guide to Publishing a Thesaurus on the Semantic Web (2008) 0.00
    0.0012269331 = product of:
      0.0049077324 = sum of:
        0.0049077324 = product of:
          0.009815465 = sum of:
            0.009815465 = weight(_text_:a in 4656) [ClassicSimilarity], result of:
              0.009815465 = score(doc=4656,freq=10.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.19940455 = fieldWeight in 4656, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4656)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This document describes in brief how to express the content and structure of a thesaurus, and metadata about a thesaurus, in RDF. Using RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.
    Editor
    Miles, A.
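    To give a flavour of the kind of RDF the Quick Guide (result 17) is about, here is a minimal sketch using the rdflib Python library; the example namespace http://example.org/thesaurus/ and the two concepts are hypothetical and not taken from the guide:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      # Two thesaurus terms as SKOS concepts, linked by a BT/NT relation.
      animals, mammals = EX["animals"], EX["mammals"]
      g.add((animals, RDF.type, SKOS.Concept))
      g.add((animals, SKOS.prefLabel, Literal("animals", lang="en")))
      g.add((mammals, RDF.type, SKOS.Concept))
      g.add((mammals, SKOS.prefLabel, Literal("mammals", lang="en")))
      g.add((mammals, SKOS.broader, animals))
      g.add((animals, SKOS.narrower, mammals))

      print(g.serialize(format="turtle"))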
  18. Koenderink, N.J.J.P.; Assem, M. van; Hulzebos, J.L.; Broekstra, J.; Top, J.L.: ROC: a method for proto-ontology construction by domain experts (2008) 0.00
    0.0011757881 = product of:
      0.0047031525 = sum of:
        0.0047031525 = product of:
          0.009406305 = sum of:
            0.009406305 = weight(_text_:a in 4647) [ClassicSimilarity], result of:
              0.009406305 = score(doc=4647,freq=18.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.19109234 = fieldWeight in 4647, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4647)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Ontology construction is a labour-intensive and costly process. Even though many formal and semi-formal vocabularies are available, creating an ontology for a specific application is hindered in a number of ways. Firstly, eliciting concepts is a time-consuming and strenuous process. Secondly, it is difficult to keep focus. Thirdly, technical modelling constructs are hard to understand for the uninitiated. We propose ROC as a method to cope with these problems. ROC builds on well-known approaches for ontology construction, but reuses existing sources to generate a repository of proposed associations. ROC assists in efficiently putting forward all relevant concepts and relations by providing a large set of potential candidate associations. Furthermore, rather than using intermediate representations of formal constructs, we confront the domain expert with 'natural-language-like' statements generated from RDF-based triples. Moreover, we strictly separate the roles of problem owner, domain expert and knowledge engineer, each having his own responsibilities and skills. The domain expert and problem owner keep focus by monitoring a well-defined application purpose. We have implemented an initial set of tools to support ROC. This paper describes the ROC method and two application cases in which we evaluate the overall approach.
  19. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.00
    0.0011520324 = product of:
      0.0046081296 = sum of:
        0.0046081296 = product of:
          0.009216259 = sum of:
            0.009216259 = weight(_text_:a in 3403) [ClassicSimilarity], result of:
              0.009216259 = score(doc=3403,freq=12.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.18723148 = fieldWeight in 3403, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3403)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
  20. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.00
    0.0011520324 = product of:
      0.0046081296 = sum of:
        0.0046081296 = product of:
          0.009216259 = sum of:
            0.009216259 = weight(_text_:a in 4641) [ClassicSimilarity], result of:
              0.009216259 = score(doc=4641,freq=12.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.18723148 = fieldWeight in 4641, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4641)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files, and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.

Languages

  • e 70
  • d 6
  • el 1

Types

  • a 20
  • n 11
  • x 2
  • r 1
  • s 1