Search (123 results, page 1 of 7)

  • language_ss:"e"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.40
    0.3955221 = product of:
      0.7910442 = sum of:
        0.07573764 = product of:
          0.22721292 = sum of:
            0.22721292 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.22721292 = score(doc=1826,freq=2.0), product of:
                0.24256827 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.028611459 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.22721292 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.22721292 = score(doc=1826,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.22721292 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.22721292 = score(doc=1826,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.033667825 = weight(_text_:web in 1826) [ClassicSimilarity], result of:
          0.033667825 = score(doc=1826,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.36057037 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.22721292 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.22721292 = score(doc=1826,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(5/10)
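     The indented breakdowns in this result list are Lucene "explain" output. Reconstructed from the values shown here (assuming Lucene's ClassicSimilarity scoring, not any customisation specific to this installation), each term clause and the document score combine as
     \[
       \mathrm{score}(q,d) = \mathrm{coord}(q,d)\sum_{t\in q}
       \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}(t)}\cdot
       \underbrace{\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}(t,d)},
       \qquad
       \mathrm{idf}(t) = 1 + \ln\frac{N}{\mathrm{df}(t)+1},
     \]
     where N is the index size (maxDocs) and coord is the fraction of query clauses matched. Nested boolean clauses contribute their own coord factors, which is why coord(1/3) appears inside this tree and coord(5/10) at its root.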
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  2. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.0183509 = product of:
      0.091754496 = sum of:
        0.06060208 = weight(_text_:web in 4649) [ClassicSimilarity], result of:
          0.06060208 = score(doc=4649,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.64902663 = fieldWeight in 4649, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.031152412 = product of:
          0.04672862 = sum of:
            0.023469873 = weight(_text_:29 in 4649) [ClassicSimilarity], result of:
              0.023469873 = score(doc=4649,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
            0.023258746 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.023258746 = score(doc=4649,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.6666667 = coord(2/3)
      0.2 = coord(2/10)
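     As a check on the arithmetic, the short Python sketch below recomputes this explain tree from the constants shown above; the helper names are ours, and only the ClassicSimilarity formula sketched under result 1 is assumed.

     import math

     # Constants copied from the explain tree for doc 4649; nothing here is
     # read from the actual index.
     QUERY_NORM = 0.028611459
     MAX_DOCS = 44218

     def idf(doc_freq):
         # ClassicSimilarity: idf(t) = 1 + ln(N / (df + 1))
         return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

     def clause_score(freq, doc_freq, field_norm):
         tf = math.sqrt(freq)                              # tf = sqrt(termFreq)
         query_weight = idf(doc_freq) * QUERY_NORM         # queryWeight
         field_weight = tf * idf(doc_freq) * field_norm    # fieldWeight
         return query_weight * field_weight                # weight(_text_:term)

     web = clause_score(18, 4597, 0.046875)                # ~0.06060208
     d29 = clause_score(2, 3565, 0.046875)                 # ~0.023469873
     d22 = clause_score(2, 3622, 0.046875)                 # ~0.023258746

     inner = (d29 + d22) * (2 / 3)                         # coord(2/3)
     total = (web + inner) * (2 / 10)                      # coord(2/10)
     print(round(total, 7))                                # prints 0.0183509

     Running it reproduces the displayed score of 0.0183509.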
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
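     For orientation, the two Web-based measures named in this abstract are usually defined as follows (standard formulations, not taken from the paper itself), with f(x) the number of pages containing x, f(x,y) the number containing both, N the number of indexed pages, and p the corresponding relative frequencies:
     \[
       \mathrm{NGD}(x,y) = \frac{\max\{\log f(x),\log f(y)\} - \log f(x,y)}{\log N - \min\{\log f(x),\log f(y)\}},
       \qquad
       \mathrm{PMI}(x,y) = \log\frac{p(x,y)}{p(x)\,p(y)}.
     \]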
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
    Theme
    Semantic Web
  3. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.02
    0.01732253 = product of:
      0.08661264 = sum of:
        0.07618159 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.07618159 = score(doc=54,freq=16.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 54) [ClassicSimilarity], result of:
              0.031293165 = score(doc=54,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 54, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=54)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
     This paper presents the Semantic Web, web content, web technologies, the goals of the Semantic Web, and the need for the expansion of Web 3.0. It also describes the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies and OWL, and discusses how Semantic Web techniques can support library functions and the best use of library collections in providing valuable information services.
    Date
    10.12.2020 9:29:12
    Theme
    Semantic Web
  4. Dietze, S.; Maynard, D.; Demidova, E.; Risse, T.; Stavrakas, Y.: Entity extraction and consolidation for social Web content preservation (2012) 0.01
    0.011404229 = product of:
      0.057021145 = sum of:
        0.050501734 = weight(_text_:web in 470) [ClassicSimilarity], result of:
          0.050501734 = score(doc=470,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5408555 = fieldWeight in 470, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 470) [ClassicSimilarity], result of:
              0.019558229 = score(doc=470,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 470, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=470)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
     With the rapidly increasing pace at which Web content is evolving, particularly social media, preserving the Web and its evolution over time becomes an important challenge. Meaningful analysis of Web content lends itself to an entity-centric view to organise Web resources according to the information objects related to them. Therefore, the crucial challenge is to extract, detect and correlate entities from a vast number of heterogeneous Web resources where the nature and quality of the content may vary heavily. While a wealth of information extraction tools aid this process, we believe that the consolidation of automatically extracted data has to be treated as an equally important step in order to ensure high quality and non-ambiguity of generated data. In this paper we present an approach which is based on an iterative cycle exploiting Web data for (1) targeted archiving/crawling of Web objects, (2) entity extraction and detection, and (3) entity correlation. The long-term goal is to preserve Web content over time and allow its navigation and analysis based on well-formed structured RDF data about entities.
    Pages
     Pp. 18-29
  5. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.01
    0.009644936 = product of:
      0.04822468 = sum of:
        0.040401388 = weight(_text_:web in 2116) [ClassicSimilarity], result of:
          0.040401388 = score(doc=2116,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 2116, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2116)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 2116) [ClassicSimilarity], result of:
              0.023469873 = score(doc=2116,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 2116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2116)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside of web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
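     To illustrate the kind of structured data the prototype is said to embed (a hypothetical sketch, not code from the MSU prototype; all values are invented), a Schema.org description of a book can be emitted as JSON-LD from Python:

     import json

     # Invented example of a Schema.org "Book" serialised as JSON-LD, the
     # linked-data component the article describes alongside RDFa markup.
     book = {
         "@context": "https://schema.org",
         "@type": "Book",
         "name": "Example Monograph",
         "author": {"@type": "Person", "name": "Jane Doe"},
         "datePublished": "2015",
         "url": "https://example.org/books/example-monograph",
     }
     print(json.dumps(book, indent=2))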
    Source
    Code4Lib journal. Issue 29(2015), [http://journal.code4lib.org/issues/issues/issue29]
  6. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.01
    0.008203137 = product of:
      0.041015685 = sum of:
        0.029231945 = weight(_text_:kommunikation in 3460) [ClassicSimilarity], result of:
          0.029231945 = score(doc=3460,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.19876751 = fieldWeight in 3460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
        0.011783739 = weight(_text_:web in 3460) [ClassicSimilarity], result of:
          0.011783739 = score(doc=3460,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.12619963 = fieldWeight in 3460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
      0.2 = coord(2/10)
    
    Content
     See also Open Password, no. 167, 1 March 2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters and two grandchildren. Garfield took his first degree, a bachelor's in chemistry, at Columbia University in New York City in 1949, added a degree in library science in 1954, and went on to earn a doctorate in structural linguistics in 1961. By his own account he was neither a particularly good nor a particularly happy chemistry student. His "awakening" came at a meeting of the American Chemical Society, when he discovered that searching for literature might be a way to earn a living: "So I went to the Chairman of the meeting and said: 'How do you get a job in this racket?'" From 1955 Garfield first worked as a consultant for pharmaceutical companies, specialising in scientific information by working through the contents of the relevant journals. In 1955 he put forward his groundbreaking idea in "Science": to record citations of scientific publications systematically and to make the connections between citations visible. In 1960 he founded the Institute for Scientific Information (ISI), whose CEO he remained until 1992. In 1964 he brought out the Science Citation Index; further measures such as the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978) and the Journal Citation Reports followed. These indexes were brought together in the "Web of Science" and made accessible electronically as a database, enabling researchers to find the literature relevant to them "at their fingertips" and to orient themselves within it. In addition, the rankings derived from Garfield's measures made it possible to gauge the relative scientific importance of scientific contributions, authors, research institutions, regions and countries.
     In connection with his measures Garfield campaigned against "bibliographic negligence" and "citation amnesia". He wrote in 2002: "There will never be a perfect solution to the problem of acknowledging intellectual debts. But a beginning can be made if journal editors will demand a signed pledge from authors that they have searched Medline, Science Citation Index, or other appropriate print and electronic databases." At the same time he warned against improper use of his measures and against exaggerated expectations of them in connection with career decisions about researchers and survival decisions about research institutions. In 1992 the Thomson Corporation acquired ISI for 210 million dollars; its present-day successor organisation, Clarivate Analytics, employs more than 4,000 people in over a hundred countries. Garfield also founded a newspaper for scientists, especially life scientists, "The Scientist", which still exists and can be obtained as a free push service. In his contributions to science policy he criticised, for example, President Reagan's science advisers in 1986 as "advocates of the administration's science policies, rather than as objective conduits for communication between the president and the science community." To his article arguing for continued support of UNESCO research programmes he gave the title "Let's stand up for Global Science". That remains a fitting title in the Trump era, in which the US government dismisses the concept of truth on which science rests as meaningless and focuses on nationalism and isolation instead of international communication, cooperation and the joint pursuit of shared interests."
  7. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    0.007640626 = product of:
      0.038203128 = sum of:
        0.0329876 = weight(_text_:web in 4639) [ClassicSimilarity], result of:
          0.0329876 = score(doc=4639,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35328537 = fieldWeight in 4639, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0052155275 = product of:
          0.015646582 = sum of:
            0.015646582 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.015646582 = score(doc=4639,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
     This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
     We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  8. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.007123591 = product of:
      0.035617955 = sum of:
        0.029157192 = weight(_text_:web in 4553) [ClassicSimilarity], result of:
          0.029157192 = score(doc=4553,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 4553, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.006460763 = product of:
          0.019382289 = sum of:
            0.019382289 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.019382289 = score(doc=4553,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Theme
    Semantic Web
  9. Internet Privacy : eine multidisziplinäre Bestandsaufnahme / a multidisciplinary analysis: acatech STUDIE (2012) 0.01
    0.006590606 = product of:
      0.06590606 = sum of:
        0.06590606 = weight(_text_:schutz in 3383) [ClassicSimilarity], result of:
          0.06590606 = score(doc=3383,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.31906208 = fieldWeight in 3383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.03125 = fieldNorm(doc=3383)
      0.1 = coord(1/10)
    
    Abstract
     Because privacy on the Internet is of such great importance, acatech, the German Academy of Science and Engineering, initiated a project in 2011 that examines the privacy paradox scientifically. The project develops recommendations for how a culture of privacy and trust can be established on the Internet, one that makes it possible to resolve the paradox. We use the term privacy (Privatheit) here to indicate that not only the spatial notion of a private sphere is meant, but also the concept of informational self-determination, which is important in the European context. This volume presents the results of the first project phase: a stocktaking of privacy on the Internet from different perspectives. Chapter 1 presents the wishes and fears of Internet users and of society with regard to their privacy, investigated using social-science methods. Complementing this, the second chapter examines privacy in cyberspace from an ethical perspective. The third chapter is devoted to economic aspects: since many online services are paid for with user data, the question arises of what this means both for users and customers and for companies. Chapter 4 has a technological focus and analyses how privacy is threatened by Internet technologies and which technical options exist to protect users' privacy. Of course, the protection of privacy on the Internet is not only a technical problem, so Chapter 5 examines privacy from a legal point of view. Reading the five chapters, the reader immediately becomes aware of the complexity of the question of privacy on the Internet (Internet privacy), from which follows the absolute necessity of an interdisciplinary approach. In this spirit, the interdisciplinary project group will jointly develop options and recommendations for dealing with privacy on the Internet that foster a culture of privacy and trust. These options and recommendations will be published in 2013 as the second volume of this study.
  10. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    0.006232995 = product of:
      0.031164974 = sum of:
        0.020200694 = weight(_text_:web in 1967) [ClassicSimilarity], result of:
          0.020200694 = score(doc=1967,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.01096428 = product of:
          0.032892838 = sum of:
            0.032892838 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.032892838 = score(doc=1967,freq=4.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
     This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  11. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    0.0060652317 = product of:
      0.030326158 = sum of:
        0.023806747 = weight(_text_:web in 4705) [ClassicSimilarity], result of:
          0.023806747 = score(doc=4705,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 4705, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 4705) [ClassicSimilarity], result of:
              0.019558229 = score(doc=4705,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 4705, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4705)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Date
    29. 7.2011 14:44:56
    Source
    The Semantic Web - ISWC 2010. 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. Eds.: Peter F. Patel-Schneider et al
  12. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.01
    0.006053502 = product of:
      0.03026751 = sum of:
        0.023806747 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.023806747 = score(doc=4550,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.006460763 = product of:
          0.019382289 = sum of:
            0.019382289 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.019382289 = score(doc=4550,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
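     A minimal sketch of the workflow described above, assuming the Habanero client for the CrossRef API (this is not the authors' template script, and the DOIs and column names are placeholders):

     import csv
     from habanero import Crossref  # pip install habanero

     dois = ["10.1000/example1", "10.1000/example2"]  # placeholder DOIs

     cr = Crossref()
     with open("batch_import.csv", "w", newline="", encoding="utf-8") as fh:
         writer = csv.writer(fh)
         writer.writerow(["doi", "title", "year", "authors"])
         for doi in dois:
             msg = cr.works(ids=doi)["message"]        # one CrossRef record
             title = (msg.get("title") or [""])[0]
             year = msg.get("issued", {}).get("date-parts", [[None]])[0][0]
             authors = "; ".join(
                 f"{a.get('family', '')}, {a.get('given', '')}"
                 for a in msg.get("author", [])
             )
             writer.writerow([doi, title, year, authors])

     The Web of Science enrichment step mentioned in the abstract is omitted here, since it depends on a locally downloaded TSV export.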
    Date
    10.11.2018 16:27:22
  13. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011) 0.01
    0.0058314386 = product of:
      0.058314383 = sum of:
        0.058314383 = weight(_text_:web in 438) [ClassicSimilarity], result of:
          0.058314383 = score(doc=438,freq=24.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6245262 = fieldWeight in 438, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=438)
      0.1 = coord(1/10)
    
    Abstract
    In this paper, we discuss the architecture and implementation of the Semantic Web Search Engine (SWSE). Following traditional search engine architecture, SWSE consists of crawling, data enhancing, indexing and a user interface for search, browsing and retrieval of information; unlike traditional search engines, SWSE operates over RDF Web data - loosely also known as Linked Data - which implies unique challenges for the system design, architecture, algorithms, implementation and user interface. In particular, many challenges exist in adopting Semantic Web technologies for Web data: the unique challenges of the Web - in terms of scale, unreliability, inconsistency and noise - are largely overlooked by the current Semantic Web standards. Herein, we describe the current SWSE system, initially detailing the architecture and later elaborating upon the function, design, implementation and performance of each individual component. In so doing, we also give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data. Throughout, we offer evaluation and complementary argumentation to support our design choices, and also offer discussion on future directions and open research questions. Later, we also provide candid discussion relating to the difficulties currently faced in bringing such a search engine into the mainstream, and lessons learnt from roughly six years working on the Semantic Web Search Engine project.
    Object
    Semantic Web Search Engine
    Theme
    Semantic Web
  14. O'Neill, E.T.; Bennett, R.; Kammerer, K.: Using authorities to improve subject searches (2012) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 310) [ClassicSimilarity], result of:
          0.020200694 = score(doc=310,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=310)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 310) [ClassicSimilarity], result of:
              0.023469873 = score(doc=310,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=310)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Date
    29. 5.2015 20:57:41
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  15. Vatant, B.: Porting library vocabularies to the Semantic Web, and back : a win-win round trip (2010) 0.01
    0.005344602 = product of:
      0.053446017 = sum of:
        0.053446017 = weight(_text_:web in 3968) [ClassicSimilarity], result of:
          0.053446017 = score(doc=3968,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.57238775 = fieldWeight in 3968, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
      0.1 = coord(1/10)
    
    Abstract
     The role of vocabularies is critical in the long overdue synergy between the Web and Library heritage. The Semantic Web should leverage existing vocabularies instead of reinventing them, but the specific features of library vocabularies make them more or less portable to the Semantic Web. Based on preliminary results in the framework of the TELplus project, we suggest guidelines for needed evolutions in order to make vocabularies usable and efficient in the Semantic Web realm, assess choices made so far by large libraries to publish vocabularies conformant to standards and good practices, and review how Semantic Web tools can help manage those vocabularies.
    Theme
    Semantic Web
  16. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.01
    0.0053233504 = product of:
      0.053233504 = sum of:
        0.053233504 = weight(_text_:web in 112) [ClassicSimilarity], result of:
          0.053233504 = score(doc=112,freq=20.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5701118 = fieldWeight in 112, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=112)
      0.1 = coord(1/10)
    
    Abstract
     Over the past 3 years, the semantic web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges in the area of the Semantic Web vision: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed: the performance gap between relational and RDF data management has to be closed, coherence and quality of data published on the Web have to be improved, provenance and trust on the Linked Data Web must be established and generally the entrance barrier for data publishers and users has to be lowered. In this vision statement we discuss these challenges and argue that research approaches tackling these challenges should be integrated into a mutual refinement cycle. We also present two crucial use-cases for the widespread adoption of linked data.
    Content
     Cf.: http://www.semantic-web-journal.net/content/new-submission-making-web-data-washing-machine-creating-knowledge-out-interlinked-data http://www.semantic-web-journal.net/sites/default/files/swj24_0.pdf.
    Source
    Semantic Web journal. 0(2010), no.1
    Theme
    Semantic Web
  17. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.01
    0.0053233504 = product of:
      0.053233504 = sum of:
        0.053233504 = weight(_text_:web in 3297) [ClassicSimilarity], result of:
          0.053233504 = score(doc=3297,freq=20.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5701118 = fieldWeight in 3297, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
      0.1 = coord(1/10)
    
    Abstract
     Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way."1 This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization."2 In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
    Theme
    Semantic Web
  18. Wright, H.: Semantic Web and ontologies (2018) 0.01
    0.0052698483 = product of:
      0.05269848 = sum of:
        0.05269848 = weight(_text_:web in 80) [ClassicSimilarity], result of:
          0.05269848 = score(doc=80,freq=10.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5643819 = fieldWeight in 80, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
      0.1 = coord(1/10)
    
    Abstract
    The Semantic Web and ontologies can help archaeologists combine and share data, making it more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies. Like all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
    Theme
    Semantic Web
  19. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.01
    0.0050501735 = product of:
      0.050501734 = sum of:
        0.050501734 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.050501734 = score(doc=5997,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5408555 = fieldWeight in 5997, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.1 = coord(1/10)
    
    Abstract
     Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, we discuss the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that must be addressed to improve their integration with the Semantic Web.
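     To make the comparison concrete, a single thesaurus entry with broader, narrower and related terms might be expressed in SKOS as follows (an illustrative rdflib sketch; the vocabulary URI and labels are invented and are not drawn from AGROVOC, EuroVoc or the UNESCO Thesaurus):

     from rdflib import Graph, Literal, Namespace
     from rdflib.namespace import RDF, SKOS

     EX = Namespace("http://example.org/thesaurus/")  # invented vocabulary URI
     g = Graph()
     g.bind("skos", SKOS)

     g.add((EX.knowledgeOrganization, RDF.type, SKOS.Concept))
     g.add((EX.knowledgeOrganization, SKOS.prefLabel,
            Literal("Knowledge organization", lang="en")))
     g.add((EX.knowledgeOrganization, SKOS.broader, EX.informationScience))  # BT
     g.add((EX.knowledgeOrganization, SKOS.narrower, EX.classification))     # NT
     g.add((EX.knowledgeOrganization, SKOS.related, EX.semanticWeb))         # RT

     print(g.serialize(format="turtle"))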
    Theme
    Semantic Web
  20. Glimm, B.; Hogan, A.; Krötzsch, M.; Polleres, A.: OWL: Yet to arrive on the Web of Data? (2012) 0.00
    0.00494814 = product of:
      0.0494814 = sum of:
        0.0494814 = weight(_text_:web in 4798) [ClassicSimilarity], result of:
          0.0494814 = score(doc=4798,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5299281 = fieldWeight in 4798, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4798)
      0.1 = coord(1/10)
    
    Abstract
    Seven years on from OWL becoming a W3C recommendation, and two years on from the more recent OWL 2 W3C recommendation, OWL has still experienced only patchy uptake on the Web. Although certain OWL features (like owl:sameAs) are very popular, other features of OWL are largely neglected by publishers in the Linked Data world. This may suggest that despite the promise of easy implementations and the proposal of tractable profiles suggested in OWL's second version, there is still no "right" standard fragment for the Linked Data community. In this paper, we (1) analyse uptake of OWL on the Web of Data, (2) gain insights into the OWL fragment that is actually used/usable on the Web, where we arrive at the conclusion that this fragment is likely to be a simplified profile based on OWL RL, (3) propose and discuss such a new fragment, which we call OWL LD (for Linked Data).
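     For readers unfamiliar with the feature singled out here: owl:sameAs asserts that two URIs denote the same resource. A minimal rdflib sketch (our example, not taken from the paper) linking two well-known identifiers for Berlin:

     from rdflib import Graph, URIRef
     from rdflib.namespace import OWL

     g = Graph()
     g.bind("owl", OWL)
     g.add((
         URIRef("http://dbpedia.org/resource/Berlin"),   # DBpedia identifier
         OWL.sameAs,
         URIRef("http://www.wikidata.org/entity/Q64"),   # Wikidata identifier
     ))
     print(g.serialize(format="turtle"))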
    Content
     Paper from the workshop Linked Data on the Web (LDOW2012), April 16, 2012, Lyon, France; cf.: http://events.linkeddata.org/ldow2012/.
    Theme
    Semantic Web

Types

  • a 74
  • s 7
  • r 3
  • x 2
  • m 1

Classifications