Search (343 results, page 1 of 18)

  • Filter: language_ss:"e"
  • Filter: type_ss:"el"
  1. Becker, C.; Bizer, C.: DBpedia Mobile : a location-aware Semantic Web client 0.09
    0.0933096 = product of:
      0.1866192 = sum of:
        0.051727813 = weight(_text_:data in 2478) [ClassicSimilarity], result of:
          0.051727813 = score(doc=2478,freq=12.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4278775 = fieldWeight in 2478, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2478)
        0.13489139 = weight(_text_:becker in 2478) [ClassicSimilarity], result of:
          0.13489139 = score(doc=2478,freq=4.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.52501196 = fieldWeight in 2478, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2478)
      0.5 = coord(2/4)
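    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a check, a minimal Python sketch can reproduce the top-level score from the leaf values; the constants are copied from the output, and the arithmetic is standard Lucene scoring, not anything specific to this installation:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm        # idf(t) * queryNorm
          tf = math.sqrt(freq)                   # tf(freq) = sqrt(freq)
          field_weight = tf * idf * field_norm   # tf * idf * fieldNorm
          return query_weight * field_weight

      data_part   = term_score(12.0, 3.1620505, 0.03823278, 0.0390625)
      becker_part = term_score(4.0,  6.7201533, 0.03823278, 0.0390625)

      # coord(2/4): only 2 of the 4 query clauses matched this document
      print((data_part + becker_part) * 2 / 4)   # ~0.0933096

    The same recipe applies to every explain tree in this result list; only the frequencies, idf values, and norms change.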
    
    Abstract
    DBpedia Mobile is a location-aware client for the Semantic Web that can be used on an iPhone and other mobile devices. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, the user can explore background information about his surroundings by navigating along data links into other Web data sources. DBpedia Mobile has been designed for the use case of a tourist exploring a city. As the application is not restricted to a fixed set of data sources but can retrieve and display data from arbitrary Web data sources, DBpedia Mobile can also be employed within other use cases, including ones unforeseen by its developers. Besides accessing Web data, DBpedia Mobile also enables users to publish their current location, pictures and reviews to the Semantic Web so that they can be used by other Semantic Web applications. Instead of simply being tagged with geographical coordinates, published content is interlinked with a nearby DBpedia resource and thus contributes to the overall richness of the Geospatial Semantic Web.
    Source
    http://www.wiwiss.fu-berlin.de/en/institute/pwo/bizer/research/publications/Becker-Bizer-DBpediaMobile-Submission.pdf
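    The location-aware lookup the abstract describes can be approximated against the public DBpedia SPARQL endpoint. The sketch below is an illustration, not the app's actual logic: the coordinates, the bounding-box radius, and the result limit are assumptions, while the endpoint URL and the geo/rdfs properties are standard DBpedia vocabulary.

      from SPARQLWrapper import SPARQLWrapper, JSON

      lat, lon, box = 52.5163, 13.3777, 0.01   # sample position and ~1 km box (hypothetical)
      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setQuery(f"""
      PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?place ?label WHERE {{
        ?place geo:lat ?lat ; geo:long ?long ; rdfs:label ?label .
        FILTER (?lat > {lat - box} && ?lat < {lat + box})
        FILTER (?long > {lon - box} && ?long < {lon + box})
        FILTER (lang(?label) = "en")
      }} LIMIT 20
      """)
      sparql.setReturnFormat(JSON)
      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["place"]["value"], "-", row["label"]["value"])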
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.05147969 = product of:
      0.10295938 = sum of:
        0.060723793 = product of:
          0.30361897 = sum of:
            0.30361897 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.30361897 = score(doc=1826,freq=2.0), product of:
                0.32413796 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03823278 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.2 = coord(1/5)
        0.042235587 = weight(_text_:data in 1826) [ClassicSimilarity], result of:
          0.042235587 = score(doc=1826,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34936053 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Content
    Presentation given at the European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.04
    0.03632982 = product of:
      0.07265964 = sum of:
        0.0506827 = weight(_text_:data in 1967) [ClassicSimilarity], result of:
          0.0506827 = score(doc=1967,freq=8.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 1967, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.021976938 = product of:
          0.043953877 = sum of:
            0.043953877 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.043953877 = score(doc=1967,freq=4.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  4. Mein erstes Lexikon : das multimediale Lexikon für Kinder ab 4 Jahren (1996) 0.04
    0.035199914 = product of:
      0.14079966 = sum of:
        0.14079966 = product of:
          0.2815993 = sum of:
            0.2815993 = weight(_text_:lexikon in 5485) [ClassicSimilarity], result of:
              0.2815993 = score(doc=5485,freq=4.0), product of:
                0.23962554 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.03823278 = queryNorm
                1.175164 = fieldWeight in 5485, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5485)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  5. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.03
    0.029970573 = product of:
      0.059941147 = sum of:
        0.041811097 = weight(_text_:data in 759) [ClassicSimilarity], result of:
          0.041811097 = score(doc=759,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34584928 = fieldWeight in 759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.0362601 = score(doc=759,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
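    A toy illustration of the gap the abstract points at: two hypothetical DTD vocabularies encode the same fact, and nothing at the XML level says the two element names coincide.

      import xml.etree.ElementTree as ET

      # Two hypothetical DTD vocabularies encoding the same fact
      doc_a = ET.fromstring("<book><author>J. Heflin</author></book>")
      doc_b = ET.fromstring("<publication><creator>J. Heflin</creator></publication>")

      print(doc_a.find("author").text)   # "J. Heflin"
      print(doc_b.find("author"))        # None - nothing at the XML level
                                         # says <creator> means the same thing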
  6. Bertelsmann Lexikon Geschichte (1996) 0.03
    0.029333262 = product of:
      0.11733305 = sum of:
        0.11733305 = product of:
          0.2346661 = sum of:
            0.2346661 = weight(_text_:lexikon in 5984) [ClassicSimilarity], result of:
              0.2346661 = score(doc=5984,freq=4.0), product of:
                0.23962554 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.03823278 = queryNorm
                0.97930336 = fieldWeight in 5984, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5984)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A reliable, up-to-date encyclopedia of history, indispensable for understanding historical contexts and present-day political problems.
  7. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.03
    0.027254261 = product of:
      0.054508522 = sum of:
        0.03378847 = weight(_text_:data in 7411) [ClassicSimilarity], result of:
          0.03378847 = score(doc=7411,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2794884 = fieldWeight in 7411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=7411)
        0.020720055 = product of:
          0.04144011 = sum of:
            0.04144011 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
              0.04144011 = score(doc=7411,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.30952093 = fieldWeight in 7411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7411)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    - Formal characterization given to the thesaurus mapping problem
    - Interoperability workflow:
      - Thesauri SKOS Core transformation (a minimal sketch follows this entry)
      - Thesaurus mapping algorithms implementation
    - The "gold standard" data set and the THALEN application
    - Thesaurus interoperability assessment measures
    - Experimental results
    Date
    7.11.2008 10:40:22
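    The "Thesauri SKOS Core transformation" step listed above can be sketched with rdflib; the SKOS property URIs are the real ones, while the example scheme and terms are hypothetical.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")   # hypothetical scheme
      g = Graph()
      g.bind("skos", SKOS)

      g.add((EX.law, RDF.type, SKOS.Concept))
      g.add((EX.law, SKOS.prefLabel, Literal("Law", lang="en")))
      g.add((EX.civilLaw, RDF.type, SKOS.Concept))
      g.add((EX.civilLaw, SKOS.prefLabel, Literal("Civil law", lang="en")))
      g.add((EX.civilLaw, SKOS.broader, EX.law))

      print(g.serialize(format="turtle"))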
  8. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.03
    0.025689062 = product of:
      0.051378123 = sum of:
        0.035838082 = weight(_text_:data in 4649) [ClassicSimilarity], result of:
          0.035838082 = score(doc=4649,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.29644224 = fieldWeight in 4649, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.031080082 = score(doc=4649,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
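    The two Web-based measures named in the abstract are easy to state in code. A minimal sketch over hypothetical page-hit counts (a real implementation would obtain the counts from a search engine):

      import math

      N = 1e10                                 # assumed size of the indexed Web
      f_x, f_y, f_xy = 2.1e6, 3.4e6, 8.0e4     # hit counts for x, y, and x AND y

      # Normalized Google Distance (Cilibrasi & Vitanyi)
      ngd = (max(math.log(f_x), math.log(f_y)) - math.log(f_xy)) \
            / (math.log(N) - min(math.log(f_x), math.log(f_y)))

      # Pointwise mutual information over the same counts
      pmi = math.log((f_xy / N) / ((f_x / N) * (f_y / N)))

      print(f"NGD = {ngd:.3f}, PMI = {pmi:.3f}")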
  9. Delsey, T.: ¬The Making of RDA (2016) 0.03
    0.025689062 = product of:
      0.051378123 = sum of:
        0.035838082 = weight(_text_:data in 2946) [ClassicSimilarity], result of:
          0.035838082 = score(doc=2946,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.29644224 = fieldWeight in 2946, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2946)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.031080082 = score(doc=2946,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  10. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.02
    0.02384748 = product of:
      0.04769496 = sum of:
        0.02956491 = weight(_text_:data in 540) [ClassicSimilarity], result of:
          0.02956491 = score(doc=540,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24455236 = fieldWeight in 540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=540)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 540) [ClassicSimilarity], result of:
              0.0362601 = score(doc=540,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=540)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    With the increasing need to establish semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implication of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
  11. Stephens, O.: Introduction to OpenRefine (2014) 0.02
    0.023704663 = product of:
      0.09481865 = sum of:
        0.09481865 = weight(_text_:data in 2884) [ClassicSimilarity], result of:
          0.09481865 = score(doc=2884,freq=28.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.7843124 = fieldWeight in 2884, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2884)
      0.25 = coord(1/4)
    
    Abstract
    OpenRefine is described as a tool for working with 'messy' data - but what does this mean? It is probably easiest to describe the kinds of data OpenRefine is good at working with and the sorts of problems it can help you solve. OpenRefine is most useful where you have data in a simple tabular format but with internal inconsistencies - in data formats, in where data appears, or in terminology used. It can help you:
    - Get an overview of a data set
    - Resolve inconsistencies in a data set
    - Split data up into more granular parts
    - Match local data up to other data sets
    - Enhance a data set with data from other sources
    Some common scenarios might be:
    1. Where you want to know how many times a particular value appears in a column in your data.
    2. Where you want to know how values are distributed across your whole data set.
    3. Where you have a list of dates which are formatted in different ways, and want to change all the dates in the list to a single common date format.
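    The scenarios above map directly onto simple dataframe operations; a minimal pandas sketch with hypothetical values (the mixed-format date parsing assumes pandas 2.0 or later):

      import pandas as pd

      # Hypothetical messy table
      df = pd.DataFrame({"place": ["Berlin", "berlin", "Berlin"],
                         "date": ["2014-01-03", "03 Jan 2014", "Jan 3, 2014"]})

      # Scenarios 1 and 2: how often each value appears / how values are distributed
      print(df["place"].str.lower().value_counts())

      # Scenario 3: normalise differently formatted dates to one common format
      df["date"] = pd.to_datetime(df["date"], format="mixed", errors="coerce")
      print(df["date"].dt.strftime("%Y-%m-%d"))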
  12. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.02
    0.022348972 = product of:
      0.08939589 = sum of:
        0.08939589 = weight(_text_:data in 4796) [ClassicSimilarity], result of:
          0.08939589 = score(doc=4796,freq=56.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.7394569 = fieldWeight in 4796, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
      0.25 = coord(1/4)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are:
    - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
    - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
    - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies;
    - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
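    Two of the designer-level recommendations - minting URIs for catalogue items and re-using existing vocabularies - can be sketched with rdflib and Dublin Core terms; the catalogue namespace and record values below are hypothetical.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import DCTERMS, RDF

      CAT = Namespace("http://example.org/catalogue/")   # hypothetical base URI
      g = Graph()
      g.bind("dcterms", DCTERMS)

      record = CAT["record/123"]                         # minted item URI
      g.add((record, RDF.type, DCTERMS.BibliographicResource))
      g.add((record, DCTERMS.title, Literal("How to publish Linked Data on the Web")))
      g.add((record, DCTERMS.creator, Literal("Bizer, C.")))
      g.add((record, DCTERMS.issued, Literal("2007")))

      print(g.serialize(format="turtle"))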
  13. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.02
    0.021407552 = product of:
      0.042815104 = sum of:
        0.02986507 = weight(_text_:data in 4553) [ClassicSimilarity], result of:
          0.02986507 = score(doc=4553,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24703519 = fieldWeight in 4553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.02590007 = score(doc=4553,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
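    The paper's own architecture is not reproduced here; as a toy stand-in for learning over RDF triples, the sketch below trains a TransE-style embedding scorer in PyTorch on hypothetical triple ids. TransE is a different, simpler technique than the authors' deep reasoner and is shown only to make the triple-scoring idea concrete.

      import torch
      import torch.nn as nn

      n_entities, n_relations, dim = 100, 10, 32
      ent = nn.Embedding(n_entities, dim)
      rel = nn.Embedding(n_relations, dim)
      opt = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=0.01)

      def score(h, r, t):
          # TransE: for a true triple (h, r, t), head + relation ~ tail,
          # so a low L1 distance means a more plausible triple
          return (ent(h) + rel(r) - ent(t)).norm(p=1, dim=-1)

      pos = torch.tensor([[0, 1, 2]])   # hypothetical true triple (h, r, t) ids
      neg = torch.tensor([[0, 1, 7]])   # the same triple with a corrupted tail
      for _ in range(100):
          opt.zero_grad()
          # margin ranking loss: push true triples below corrupted ones
          loss = torch.relu(1.0 + score(*pos.T) - score(*neg.T)).mean()
          loss.backward()
          opt.step()
      print(float(score(*pos.T)), float(score(*neg.T)))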
  14. Bizer, C.; Cyganiak, R.; Heath, T.: How to publish Linked Data on the Web (2007) 0.02
    0.020905549 = product of:
      0.083622195 = sum of:
        0.083622195 = weight(_text_:data in 3791) [ClassicSimilarity], result of:
          0.083622195 = score(doc=3791,freq=16.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.69169855 = fieldWeight in 3791, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3791)
      0.25 = coord(1/4)
    
    Abstract
    This document provides a tutorial on how to publish Linked Data on the Web. After a general overview of the concept of Linked Data, we describe several practical recipes for publishing information as Linked Data on the Web.
    Content
    This tutorial has been superseded by the book Linked Data: Evolving the Web into a Global Data Space, written by Tom Heath and Christian Bizer. This tutorial was published in 2007 and is still online for historical reasons. The Linked Data book was published in 2011 and provides a more detailed and up-to-date introduction to Linked Data.
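    A core recipe from this kind of tutorial is serving dereferenceable URIs with content negotiation. A minimal sketch of the standard Accept-header pattern (the Flask app, the URI, and the data are hypothetical; the tutorial itself is not tied to any particular framework):

      from flask import Flask, Response, request

      app = Flask(__name__)

      TURTLE = """@prefix dcterms: <http://purl.org/dc/terms/> .
      <http://example.org/resource/1> dcterms:title "An example resource" ."""

      @app.route("/resource/1")
      def resource():
          # Pick the representation the client asked for via the Accept header
          best = request.accept_mimetypes.best_match(["text/turtle", "text/html"])
          if best == "text/turtle":
              return Response(TURTLE, mimetype="text/turtle")
          return "<html><body><h1>An example resource</h1></body></html>"

      if __name__ == "__main__":
          app.run()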
  15. Bertelsmann Lexikon Tiere (1994) 0.02
    0.020741748 = product of:
      0.08296699 = sum of:
        0.08296699 = product of:
          0.16593398 = sum of:
            0.16593398 = weight(_text_:lexikon in 5382) [ClassicSimilarity], result of:
              0.16593398 = score(doc=5382,freq=2.0), product of:
                0.23962554 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.03823278 = queryNorm
                0.692472 = fieldWeight in 5382, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5382)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  16. Lexikon des internationalen Films : die ganze Welt des Films auf CD-ROM (1997) 0.02
    0.020741748 = product of:
      0.08296699 = sum of:
        0.08296699 = product of:
          0.16593398 = sum of:
            0.16593398 = weight(_text_:lexikon in 1203) [ClassicSimilarity], result of:
              0.16593398 = score(doc=1203,freq=2.0), product of:
                0.23962554 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.03823278 = queryNorm
                0.692472 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  17. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    0.020440696 = product of:
      0.04088139 = sum of:
        0.02534135 = weight(_text_:data in 2655) [ClassicSimilarity], result of:
          0.02534135 = score(doc=2655,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2096163 = fieldWeight in 2655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.031080082 = score(doc=2655,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
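    The modular synthesis described in the abstract - baseline facets combined into compound classes - can be made concrete in a few lines; the facet names and values are hypothetical.

      from itertools import product

      # Hypothetical baseline facets for a small domain
      facets = {
          "discipline": ["medicine", "law"],
          "operation":  ["classification", "retrieval"],
          "carrier":    ["print", "digital"],
      }

      # Synthesis: a compound class picks one value per facet
      for combo in product(*facets.values()):
          print(":".join(combo))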
  18. Decimal Classification Editorial Policy Committee (2002) 0.02
    0.019715954 = product of:
      0.039431907 = sum of:
        0.021117793 = weight(_text_:data in 236) [ClassicSimilarity], result of:
          0.021117793 = score(doc=236,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.17468026 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=236)
        0.018314114 = product of:
          0.036628228 = sum of:
            0.036628228 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
              0.036628228 = score(doc=236,freq=4.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.27358043 = fieldWeight in 236, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Decimal Classification Editorial Policy Committee (EPC) held its Meeting 117 at the Library Dec. 3-5, 2001, with chair Andrea Stamm (Northwestern University) presiding. Through its actions at this meeting, significant progress was made toward publication of DDC unabridged Edition 22 in mid-2003 and Abridged Edition 14 in early 2004. For Edition 22, the committee approved the revisions to two major segments of the classification: Table 2 through 55 Iran (the first half of the geographic area table) and 900 History and geography. EPC approved updates to several parts of the classification it had already considered: 004-006 Data processing, Computer science; 340 Law; 370 Education; 510 Mathematics; 610 Medicine; Table 3 issues concerning treatment of scientific and technical themes, with folklore, arts, and printing ramifications at 398.2 - 398.3, 704.94, and 758; Table 5 and Table 6 Ethnic Groups and Languages (portions concerning American native peoples and languages); and tourism issues at 647.9 and 790. Reports on the results of testing the approved 200 Religion and 305-306 Social groups schedules were received, as was a progress report on revision work for the manual being done by Ross Trotter (British Library, retired). Revisions for Abridged Edition 14 that received committee approval included 010 Bibliography; 070 Journalism; 150 Psychology; 370 Education; 380 Commerce, communications, and transportation; 621 Applied physics; 624 Civil engineering; and 629.8 Automatic control engineering. At the meeting the committee received print versions of DC& numbers 4 and 5. Primarily for the use of Dewey translators, these cumulations list changes, substantive and cosmetic, to DDC Edition 21 and Abridged Edition 13 for the period October 1999 - December 2001. EPC will hold its Meeting 118 at the Library May 15-17, 2002.
  19. Wright, H.: Semantic Web and ontologies (2018) 0.02
    0.01955535 = product of:
      0.0782214 = sum of:
        0.0782214 = weight(_text_:data in 80) [ClassicSimilarity], result of:
          0.0782214 = score(doc=80,freq=14.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.64702475 = fieldWeight in 80, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
      0.25 = coord(1/4)
    
    Abstract
    The Semantic Web and ontologies can help archaeologists combine and share data, making it more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies. Like all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
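    The federated cross-search mentioned in the abstract corresponds to SPARQL's SERVICE keyword. A minimal sketch (the endpoints are real public ones, the item id is chosen only for illustration, and whether a given endpoint permits outbound SERVICE calls is configuration-dependent):

      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setQuery("""
      PREFIX wd: <http://www.wikidata.org/entity/>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?label WHERE {
        SERVICE <https://query.wikidata.org/sparql> {
          wd:Q23498 rdfs:label ?label .      # item id chosen for illustration
          FILTER (lang(?label) = "en")
        }
      } LIMIT 1
      """)
      sparql.setReturnFormat(JSON)
      print(sparql.query().convert()["results"]["bindings"])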
  20. Rusch-Feja, D.; Becker, H.J.: Global Info : the German digital libraries project (1999) 0.02
    0.019076524 = product of:
      0.0763061 = sum of:
        0.0763061 = weight(_text_:becker in 1242) [ClassicSimilarity], result of:
          0.0763061 = score(doc=1242,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.29699162 = fieldWeight in 1242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03125 = fieldNorm(doc=1242)
      0.25 = coord(1/4)
    

Types

  • a 173
  • s 12
  • r 7
  • i 6
  • n 6
  • x 3
  • p 2
  • m 1
