Search (132 results, page 1 of 7)

  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Daudaravicius, V.: ¬A framework for keyphrase extraction from scientific journals (2016) 0.07
    0.074950635 = product of:
      0.14990127 = sum of:
        0.067437425 = weight(_text_:wide in 2930) [ClassicSimilarity], result of:
          0.067437425 = score(doc=2930,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.036585998 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.036585998 = score(doc=2930,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.04587784 = weight(_text_:computer in 2930) [ClassicSimilarity], result of:
          0.04587784 = score(doc=2930,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.5 = coord(3/6)
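
    The indented tree under each hit is Lucene's "explain" output for its ClassicSimilarity (classic TF-IDF) scoring. As a cross-check, here is a minimal Python sketch that recomputes the score of result 1 from the factors listed above; the queryNorm, idf, and fieldNorm values are copied verbatim from the explain output, and coord(3/6) means three of six query clauses matched:

```python
import math

# Factors copied from the explain tree for doc 2930 above.
QUERY_NORM = 0.044416238

def term_score(idf, field_norm, freq):
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * QUERY_NORM       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

parts = [
    term_score(4.4307585, 0.0546875, 2.0),  # "wide"     -> 0.067437425
    term_score(3.2635105, 0.0546875, 2.0),  # "web"      -> 0.036585998
    term_score(3.6545093, 0.0546875, 2.0),  # "computer" -> 0.045877840
]
score = (3 / 6) * sum(parts)               # coord(3/6) * sum of term scores
print(score)                               # ~0.074950635, the 0.07 shown above
```
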
    
    Abstract
    We present a framework for keyphrase extraction from scientific journals in diverse research fields. While journal articles are often provided with manually assigned keywords, it is not clear how to automatically extract keywords and measure their significance for a set of journal articles. We compare extracted keyphrases from journals in the fields of astrophysics, mathematics, physics, and computer science. We show that the presented statistics-based framework is able to demonstrate differences among journals, and that the extracted keyphrases can be used to represent journal or conference research topics, dynamics, and specificity.
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  2. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.04
    0.042189382 = product of:
      0.12656814 = sum of:
        0.04816959 = weight(_text_:wide in 5997) [ClassicSimilarity], result of:
          0.04816959 = score(doc=5997,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.078398556 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.078398556 = score(doc=5997,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5408555 = fieldWeight in 5997, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.33333334 = coord(2/6)
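
    The per-term factors themselves follow Lucene's classic formulas: tf is the square root of the term frequency, and idf is 1 + ln(maxDocs / (docFreq + 1)). A quick sketch confirming the numbers shown for result 2, where "web" occurs 18 times:

```python
import math

def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(tf(18.0))          # 4.2426405..., the tf(freq=18.0) above
print(idf(4597, 44218))  # 3.2635105..., idf for "web"
print(idf(1430, 44218))  # 4.4307585..., idf for "wide"
```
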
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
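
    As a side note to the ISO 25964/SKOS comparison the abstract describes, a thesaurus BT/NT pair maps directly onto skos:broader / skos:narrower. A minimal rdflib sketch; the concepts and namespace are invented for illustration, not taken from AGROVOC, EuroVoc, or the UNESCO Thesaurus:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)

for concept, label in [(EX.animals, "animals"), (EX.cats, "cats")]:
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))

# Thesaurus BT/NT pairs become skos:broader / skos:narrower links.
g.add((EX.cats, SKOS.broader, EX.animals))
g.add((EX.animals, SKOS.narrower, EX.cats))

print(g.serialize(format="turtle"))
```
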
    Theme
    Semantic Web
  3. Klic, L.; Miller, M.; Nelson, J.K.; Germann, J.E.: Approaching the largest 'API' : extracting information from the Internet with Python (2018) 0.04
    0.04017412 = product of:
      0.12052235 = sum of:
        0.057803504 = weight(_text_:wide in 4239) [ClassicSimilarity], result of:
          0.057803504 = score(doc=4239,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 4239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
        0.062718846 = weight(_text_:web in 4239) [ClassicSimilarity], result of:
          0.062718846 = score(doc=4239,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 4239, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
      0.33333334 = coord(2/6)
    
    Abstract
    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the World Wide Web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning by assembling custom datasets.
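
    A minimal sketch of the fetch-and-parse pattern the abstract describes, using two of the four packages it names (urllib from the standard library and BeautifulSoup); the URL and the selector are placeholders, not details of the article's warrant data project:

```python
from urllib.request import urlopen

from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = urlopen("https://example.org/records").read()  # placeholder URL
soup = BeautifulSoup(html, "html.parser")

# Assemble a custom dataset from every table cell on the page.
rows = [td.get_text(strip=True) for td in soup.select("table tr td")]
print(rows)
```
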
  4. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.04
    0.03737321 = product of:
      0.11211963 = sum of:
        0.057803504 = weight(_text_:wide in 761) [ClassicSimilarity], result of:
          0.057803504 = score(doc=761,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
        0.054316122 = weight(_text_:web in 761) [ClassicSimilarity], result of:
          0.054316122 = score(doc=761,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 761, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
      0.33333334 = coord(2/6)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
    Theme
    Semantic Web
  5. Pohl, A.; Steeg, F.: Zurück ins Web : die Entwicklung eines neuen Webauftritts für die Nordrhein-Westfälische Bibliographie (NWBib) (2016) 0.04
    0.03737321 = product of:
      0.11211963 = sum of:
        0.057803504 = weight(_text_:wide in 3063) [ClassicSimilarity], result of:
          0.057803504 = score(doc=3063,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 3063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3063)
        0.054316122 = weight(_text_:web in 3063) [ClassicSimilarity], result of:
          0.054316122 = score(doc=3063,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 3063, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3063)
      0.33333334 = coord(2/6)
    
    Abstract
    Since the beginning of 2014, the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (hbz) has been developing a new web presence for the state bibliography of North Rhine-Westphalia, the Nordrhein-Westfälische Bibliographie (NWBib), following specifications from, and under review by, the university and state libraries in Düsseldorf, Münster, and Bonn. The development builds on the web interface of the Linked Open Data service lobid and uses open-source software throughout. From the perspective of the development team at hbz, the article describes the context and course of the project. It sketches the historical development of the NWBib with a focus on the bibliography's relationship to the World Wide Web (WWW), explains the preconditions for the redevelopment and the guidelines of the development process, and gives an overview of the usage of the new web presence and of the technology used to implement it. The article closes with lessons learned and an outlook on further development.
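
    For orientation, a sketch of what querying the lobid web API that the new NWBib front end builds on might look like; the exact endpoint and parameters are assumptions here, not taken from the article or the NWBib code:

```python
import json
from urllib.request import urlopen

# Assumed endpoint and query parameters for lobid's public JSON API.
url = "https://lobid.org/resources/search?q=Westfalen&format=json"
with urlopen(url) as resp:
    data = json.load(resp)
print(data)
```
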
  6. Mäkelä, E.; Hyvönen, E.; Ruotsalo, T.: How to deal with massively heterogeneous cultural heritage data : lessons learned in CultureSampo (2012) 0.04
    0.036481895 = product of:
      0.109445676 = sum of:
        0.07012181 = weight(_text_:web in 3263) [ClassicSimilarity], result of:
          0.07012181 = score(doc=3263,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.48375595 = fieldWeight in 3263, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3263)
        0.039323866 = weight(_text_:computer in 3263) [ClassicSimilarity], result of:
          0.039323866 = score(doc=3263,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 3263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3263)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents the CultureSampo system for publishing heterogeneous linked data as a service. Discussed are the problems of converting legacy data into linked data, as well as the challenge of making the massively heterogeneous yet interlinked cultural heritage content interoperable on a semantic level. Novel user interface concepts for utilizing the content are also presented. In the approach described, the data is published not only for human use, but also as intelligent services for other computer systems that can then provide interfaces of their own for the linked data. As a concrete use case of using CultureSampo as a service, the BookSampo system for publishing Finnish fiction literature on the semantic web is presented.
    Content
    Contribution to a special issue "Semantic Web and Reasoning for Cultural Heritage and Digital Libraries": http://www.semantic-web-journal.net/content/how-deal-massively-heterogeneous-cultural-heritage-data-%E2%80%93-lessons-learned-culturesampo http://www.semantic-web-journal.net/sites/default/files/swj160_0.pdf.
    Source
    Semantic Web journal. 3(2012) no.1, S.85-109
  7. Leskinen, P.; Hyvönen, E.: Extracting genealogical networks of linked data from biographical texts (2019) 0.04
    0.036415547 = product of:
      0.10924664 = sum of:
        0.063368805 = weight(_text_:web in 5798) [ClassicSimilarity], result of:
          0.063368805 = score(doc=5798,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43716836 = fieldWeight in 5798, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5798)
        0.04587784 = weight(_text_:computer in 5798) [ClassicSimilarity], result of:
          0.04587784 = score(doc=5798,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 5798, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5798)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents our work on automatically extracting and reassembling a genealogical network from a collection of biographies. The network can be used as a tool for network analysis of historical persons. The data has been published as Linked Data and as an interactive online service as part of the in-use data service and semantic portal BiographySampo - Finnish Biographies on the Semantic Web.
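
    A toy sketch of the underlying idea: mine (person, relation, person) triples from biography text with a simple pattern and assemble them into a directed network for analysis. BiographySampo's actual extraction pipeline is far richer; the pattern and the sample sentence below are invented:

```python
import re

import networkx as nx  # pip install networkx

bio = "Anna Virtanen was the daughter of Matti Virtanen."
pattern = re.compile(r"(\w+ \w+) was the (?:son|daughter) of (\w+ \w+)")

g = nx.DiGraph()
for child, parent in pattern.findall(bio):
    g.add_edge(parent, child, relation="parent-of")

print(list(g.edges(data=True)))
```
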
    Series
    Lecture notes in computer science; vol.11762
    Source
    ¬The Semantic Web: ESWC 2019 Satellite Events. ESWC 2019. Ed.: P. Hitzler et al
    Theme
    Semantic Web
  8. Gábor, K.; Zargayouna, H.; Tellier, I.; Buscaldi, D.; Charnois, T.: ¬A typology of semantic relations dedicated to scientific literature analysis (2016) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 2933) [ClassicSimilarity], result of:
          0.067437425 = score(doc=2933,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.036585998 = weight(_text_:web in 2933) [ClassicSimilarity], result of:
          0.036585998 = score(doc=2933,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
      0.33333334 = coord(2/6)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  9. Halpin, H.; Hayes, P.J.; McCusker, J.P.; McGuinness, D.L.; Thompson, H.S.: When owl:sameAs isn't the same : an analysis of identity in linked data (2010) 0.03
    0.03401424 = product of:
      0.10204271 = sum of:
        0.062718846 = weight(_text_:web in 4703) [ClassicSimilarity], result of:
          0.062718846 = score(doc=4703,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 4703, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4703)
        0.039323866 = weight(_text_:computer in 4703) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4703,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4703)
      0.33333334 = coord(2/6)
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regard to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
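
    To make the discussion concrete, a small rdflib sketch of asserting and querying an owl:sameAs link; the two URIs are invented, and the point is that owl:sameAs is an ordinary triple until a reasoner treats it as full identity:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
a = URIRef("http://example.org/people/alice")          # hypothetical URIs
b = URIRef("http://other.example.net/id/alice-smith")
g.add((a, OWL.sameAs, b))

for s, o in g.subject_objects(OWL.sameAs):
    print(s, "owl:sameAs", o)
```
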
    Series
    Lecture notes in computer science; 6496
    Source
    The Semantic Web - ISWC 2010. 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. Eds.: Peter F. Patel-Schneider et al
  10. Mühlbauer, P.: Upload in Computer klappt (2018) 0.03
    0.033508338 = product of:
      0.10052501 = sum of:
        0.07946275 = weight(_text_:computer in 4113) [ClassicSimilarity], result of:
          0.07946275 = score(doc=4113,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.48954517 = fieldWeight in 4113, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4113)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 4113) [ClassicSimilarity], result of:
              0.04212451 = score(doc=4113,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 4113, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4113)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The three computer scientists Mathias Lechner, Radu Grosu, and Ramin Hasani, researchers at the Vienna University of Technology, have succeeded in transferring the nervous system of the nematode Caenorhabditis elegans (C. elegans) into a computer as software, and in demonstrating that the "uploaded" virtual worm reacts to stimuli exactly as a real nematode reacts to real stimuli in reality. To show this, they had it master a task that, according to Hasani, resembles balancing a pole.
    Date
    12. 2.2018 15:22:19
    Source
    https://www.heise.de/tp/features/Upload-in-Computer-klappt-3962785.html?view=print
  11. Lange, C.: Ontologies and languages for representing mathematical knowledge on the Semantic Web (2011) 0.03
    0.031851403 = product of:
      0.09555421 = sum of:
        0.0693383 = weight(_text_:web in 135) [ClassicSimilarity], result of:
          0.0693383 = score(doc=135,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.47835067 = fieldWeight in 135, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=135)
        0.02621591 = weight(_text_:computer in 135) [ClassicSimilarity], result of:
          0.02621591 = score(doc=135,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.16150802 = fieldWeight in 135, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=135)
      0.33333334 = coord(2/6)
    
    Abstract
    Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises logical and functional structures - formulæ, statements, and theories -, a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and (counter-)examples. Our review of approaches to representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.
    Content
    Vgl.: http://www.semantic-web-journal.net/content/ontologies-and-languages-representing-mathematical-knowledge-semantic-web http://www.semantic-web-journal.net/sites/default/files/swj122_2.pdf.
    Source
    Semantic Web journal. 2(2012), no.x
  12. Hyvönen, E.; Leskinen, P.; Tamper, M.; Keravuori, K.; Rantala, H.; Ikkala, E.; Tuominen, J.: BiographySampo - publishing and enriching biographies on the Semantic Web for digital humanities research (2019) 0.03
    0.030401578 = product of:
      0.09120473 = sum of:
        0.05843484 = weight(_text_:web in 5799) [ClassicSimilarity], result of:
          0.05843484 = score(doc=5799,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.40312994 = fieldWeight in 5799, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5799)
        0.03276989 = weight(_text_:computer in 5799) [ClassicSimilarity], result of:
          0.03276989 = score(doc=5799,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 5799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5799)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper argues for making a paradigm shift in publishing and using biographical dictionaries on the web, based on Linked Data. The idea is to provide the user with an enhanced reading experience of biographies by enriching contents with data linking and reasoning. In addition, versatile tooling is provided for 1) biographical research on individual persons as well as for 2) prosopographical research on groups of people. To demonstrate and evaluate the new possibilities, we present the semantic portal "BiographySampo - Finnish Biographies on the Semantic Web". The system is based on a knowledge graph extracted automatically from a collection of 13,100 textual biographies, enriched with data linking to 16 external data sources, and by harvesting external collection data from libraries, museums, and archives. The portal was released in September 2018 for free public use at: http://biografiasampo.fi.
    Series
    Lecture notes in computer science; vol.11503
    Source
    ¬The Semantic Web: ESWC 2019 Satellite Events, Portoroz, Slovenia, June 2-6, 2019, Revised Selected Papers
    Theme
    Semantic Web
  13. Belpassi, E.: ¬The application software RIMMF : RDA thinking in action (2016) 0.03
    0.029720977 = product of:
      0.08916293 = sum of:
        0.057803504 = weight(_text_:wide in 2959) [ClassicSimilarity], result of:
          0.057803504 = score(doc=2959,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 2959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2959)
        0.031359423 = weight(_text_:web in 2959) [ClassicSimilarity], result of:
          0.031359423 = score(doc=2959,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 2959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2959)
      0.33333334 = coord(2/6)
    
    Abstract
    The RIMMF software grew out of the need to visualize and create records according to the RDA guidelines. The article describes the structure and features of the software in the creation of an r-ball, that is, a small database populated with records for bibliographic and authority resources, enriched by relationships between and among the entities involved. It first introduces the need that led to RIMMF and then moves to a functional analysis of the software, describing the main steps of building the r-ball and emphasizing the issues raised. The results highlight some critical aspects, but above all the wide scope of possible developments that open the horizon of cultural heritage institutions toward the web. The conclusions outline the RDF/linked-data development of RIMMF in the near future.
  14. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.03
    0.027890932 = product of:
      0.08367279 = sum of:
        0.04434892 = weight(_text_:web in 2697) [ClassicSimilarity], result of:
          0.04434892 = score(doc=2697,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 2697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
        0.039323866 = weight(_text_:computer in 2697) [ClassicSimilarity], result of:
          0.039323866 = score(doc=2697,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 2697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
      0.33333334 = coord(2/6)
    
    Abstract
    Text mining and natural language processing are fast growing areas of research, with numerous applications in business, science and creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser, and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules, and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
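
    The first use case, comparing document classifiers on a text categorization problem, looks roughly like this as a plain scikit-learn script rather than a TextFlows visual workflow; the toy documents and labels are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["cheap pills online", "meeting at noon",
        "win money now", "project report attached"]
labels = ["spam", "ham", "spam", "ham"]

# Compare two classifiers on the same bag-of-words features.
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    model = make_pipeline(CountVectorizer(), clf)
    model.fit(docs, labels)
    print(type(clf).__name__, model.predict(["free money pills"]))
```
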
    Source
    Science of computer programming. In Press, 2016
  15. Bold, N.; Kim, W.-J.; Yang, J.-D.: Converting object-based thesauri into XML Topic Maps (2010) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 4799) [ClassicSimilarity], result of:
          0.031359423 = score(doc=4799,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4799)
        0.039323866 = weight(_text_:computer in 4799) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4799,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4799)
      0.33333334 = coord(2/6)
    
    Abstract
    Constructing an ontology is in general a considerably time-consuming process. Since a vast number of thesauri are currently available, exploiting them may be a feasible way to construct an ontology in a short period of time. This paper designs and implements an XTM (XML Topic Maps) code converter that generates an XTM-coded ontology from an object-based thesaurus, i.e., an extended thesaurus that enriches conventional thesauri with user-defined associations and with a notion of instances and occurrences associated with them. The reason we adopt XTM is that it is a verified and practical methodology for semantically reorganizing the conceptual structure of existing web applications with minimal effort. Moreover, since XTM is conceptually similar to our object-based thesauri, the recommendation and inference mechanisms already developed in our system can easily be applied to the generated XTM ontology. To show that the XTM ontology is correct, we also verify it with the Ontopia Omnigator and Vizigator, components of the Ontopia Knowledge Suite (OKS) tool.
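
    A condensed sketch of the conversion step: emit XTM-style topics and associations from a thesaurus held as a plain dict. The thesaurus content is invented, and the element structure is simplified relative to full XTM 1.0 (which wraps association members in topicRef links):

```python
import xml.etree.ElementTree as ET

# Hypothetical object-based thesaurus: term -> narrower terms.
thesaurus = {"animals": ["cats"], "cats": []}

tm = ET.Element("topicMap")
for term in thesaurus:
    topic = ET.SubElement(tm, "topic", id=term)
    name = ET.SubElement(topic, "baseName")
    ET.SubElement(name, "baseNameString").text = term

for term, narrower_terms in thesaurus.items():
    for narrower in narrower_terms:
        assoc = ET.SubElement(tm, "association", type="broader-narrower")
        ET.SubElement(assoc, "member", role="broader").text = term
        ET.SubElement(assoc, "member", role="narrower").text = narrower

print(ET.tostring(tm, encoding="unicode"))
```
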
    Source
    2010 2nd International Conference on Education Technology and Computer (ICETC)
  16. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi­automatic matching­procedure for building up vocabulary crosswalks (2013) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 989) [ClassicSimilarity], result of:
          0.031359423 = score(doc=989,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 989, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
        0.039323866 = weight(_text_:computer in 989) [ClassicSimilarity], result of:
          0.039323866 = score(doc=989,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 989, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
      0.33333334 = coord(2/6)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments. Offered through the web and linked with each other, they act as a central link so that users can move back and forth between different data sources available online. In the past, crosswalks between different thesauri have primarily been developed manually; in the long run, the intellectual updating of such crosswalks requires huge personnel expense. Therefore, the integration of automatic matching procedures, such as ontology matching tools, seems an obvious need. On the basis of computer-generated correspondences between the Thesaurus for Economics (STW) and the Thesaurus for the Social Sciences (TheSoz), our contribution explores cross-border approaches between IT-assisted tools and procedures on the one hand and external quality measurement by domain experts on the other hand. The techniques that emerge enable semi-automatically performed vocabulary crosswalks.
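
    A minimal sketch of the semi-automatic step: propose crosswalk candidates by string similarity and leave the decision to a domain expert. The terms are invented, and real ontology matching tools (as in the STW/TheSoz case) use far richer evidence than labels alone:

```python
from difflib import SequenceMatcher

stw = ["Labour market", "Monetary policy", "Taxation"]      # invented samples
thesoz = ["labor market", "monetary policies", "migration"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for term in stw:
    best = max(thesoz, key=lambda other: similarity(term, other))
    score = similarity(term, best)
    if score > 0.7:  # candidates above the threshold go to the expert
        print(f"{term} <-> {best} ({score:.2f})")
```
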
  17. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 3829) [ClassicSimilarity], result of:
          0.031359423 = score(doc=3829,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.039323866 = weight(_text_:computer in 3829) [ClassicSimilarity], result of:
          0.039323866 = score(doc=3829,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.33333334 = coord(2/6)
    
    Content
    Thesis submitted to the Graduate School of Natural and Applied Sciences of Middle East Technical University in partial fulfilment of the requirements for the degree of Master of Science in Computer Engineering (XII, 57 p.)
    Theme
    Semantic Web
  18. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.02
    0.023242442 = product of:
      0.069727324 = sum of:
        0.036957435 = weight(_text_:web in 4705) [ClassicSimilarity], result of:
          0.036957435 = score(doc=4705,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 4705, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
        0.03276989 = weight(_text_:computer in 4705) [ClassicSimilarity], result of:
          0.03276989 = score(doc=4705,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 4705, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
      0.33333334 = coord(2/6)
    
    Series
    Lecture notes in computer science; 6496
    Source
    The Semantic Web - ISWC 2010. 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. Eds.: Peter F. Patel-Schneider et al
  19. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.02
    0.020102633 = product of:
      0.060307898 = sum of:
        0.045263432 = weight(_text_:web in 4553) [ClassicSimilarity], result of:
          0.045263432 = score(doc=4553,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 4553, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.030088935 = score(doc=4553,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
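
    As a loose illustration of non-deductive inference over triples (not the architecture evaluated in the paper), embedding-based approaches score candidate (subject, predicate, object) statements with learned vectors, TransE-style; everything below is a toy stand-in with untrained random embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
entities = {"alice": rng.normal(size=8), "bob": rng.normal(size=8)}
relations = {"knows": rng.normal(size=8)}

def score(s, p, o):
    # After training, plausible triples get a small ||s + p - o||, i.e. a
    # high (less negative) score; here the embeddings are untrained.
    return -np.linalg.norm(entities[s] + relations[p] - entities[o])

print(score("alice", "knows", "bob"))
```
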
    Date
    16.11.2018 14:22:01
    Theme
    Semantic Web
  20. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.02
    0.019710634 = product of:
      0.118263796 = sum of:
        0.118263796 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.118263796 = score(doc=54,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper presents an overview of the Semantic Web: web content writing, web technology, the goals of the Semantic Web, and what is required for the expansion of Web 3.0. It also describes the main components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies, and OWL, and discusses how the Semantic Web can support valuable information services in libraries and promote the best use of library collections.
    Theme
    Semantic Web

Languages

  • e 77
  • d 50
  • f 2
  • i 2
  • a 1