Search (330 results, page 1 of 17)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.17
    0.16578756 = product of:
      0.41446888 = sum of:
        0.10361722 = product of:
          0.31085166 = sum of:
            0.31085166 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.31085166 = score(doc=1826,freq=2.0), product of:
                0.33185944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039143547 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.31085166 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.31085166 = score(doc=1826,freq=2.0), product of:
            0.33185944 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039143547 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.4 = coord(2/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
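    The indented breakdown above is Lucene "explain" output for classic TF-IDF scoring, and the same structure repeats under every result below. As a minimal sketch, assuming the ClassicSimilarity formulas of older Lucene releases (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), the "_text_:3a" leg of this score can be reproduced in Python:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          i = idf(doc_freq, max_docs)                        # 8.478011 for docFreq=24
          query_weight = i * query_norm                      # 0.33185944
          field_weight = math.sqrt(freq) * i * field_norm    # 0.93669677
          return query_weight * field_weight

      leg = term_score(freq=2.0, doc_freq=24, max_docs=44218,
                       query_norm=0.039143547, field_norm=0.078125)
      print(round(leg, 8))   # ~0.31085166, the weight(_text_:3a in 1826) value

    The entry's total then follows the tree: (0.31085166 * coord(1/3) + 0.31085166) * coord(2/5) = 0.16578756. Only freq, docFreq and fieldNorm vary across the results below.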
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.13
    0.13263004 = product of:
      0.3315751 = sum of:
        0.082893774 = product of:
          0.24868132 = sum of:
            0.24868132 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.24868132 = score(doc=230,freq=2.0), product of:
                0.33185944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039143547 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.24868132 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.24868132 = score(doc=230,freq=2.0), product of:
            0.33185944 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039143547 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.4 = coord(2/5)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.08
    0.08289378 = product of:
      0.20723444 = sum of:
        0.05180861 = product of:
          0.15542583 = sum of:
            0.15542583 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.15542583 = score(doc=4388,freq=2.0), product of:
                0.33185944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039143547 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.15542583 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.15542583 = score(doc=4388,freq=2.0), product of:
            0.33185944 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039143547 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.4 = coord(2/5)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  4. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.08
    0.08261948 = product of:
      0.20654869 = sum of:
        0.19049403 = weight(_text_:conversion in 4641) [ClassicSimilarity], result of:
          0.19049403 = score(doc=4641,freq=8.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.80325556 = fieldWeight in 4641, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.016054653 = product of:
          0.032109305 = sum of:
            0.032109305 = weight(_text_:29 in 4641) [ClassicSimilarity], result of:
              0.032109305 = score(doc=4641,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.23319192 = fieldWeight in 4641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4641)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Date
    29. 7.2011 14:44:56
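    The representation this abstract describes can be illustrated with rdflib. This is a rough sketch only: the class and property names (NounSynset, gloss, containsWordSense, hyponymOf) and the instance URIs follow my recollection of the W3C draft and should be treated as assumptions:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF

      WN20 = Namespace("http://www.w3.org/2006/03/wn/wn20/schema/")      # assumed schema namespace
      INST = Namespace("http://www.w3.org/2006/03/wn/wn20/instances/")   # assumed instance namespace

      g = Graph()
      g.bind("wn20schema", WN20)

      dog = INST["synset-dog-noun-1"]
      g.add((dog, RDF.type, WN20.NounSynset))                        # synsets become typed resources
      g.add((dog, WN20.gloss, Literal("a domesticated canid", lang="en")))
      g.add((dog, WN20.containsWordSense, INST["wordsense-dog-noun-1"]))
      g.add((dog, WN20.hyponymOf, INST["synset-canine-noun-1"]))     # semantic relation between synsets

      print(g.serialize(format="turtle"))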
  5. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.06
    0.06030171 = product of:
      0.15075427 = sum of:
        0.13469961 = weight(_text_:conversion in 4642) [ClassicSimilarity], result of:
          0.13469961 = score(doc=4642,freq=4.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.56798744 = fieldWeight in 4642, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
        0.016054653 = product of:
          0.032109305 = sum of:
            0.032109305 = weight(_text_:29 in 4642) [ClassicSimilarity], result of:
              0.032109305 = score(doc=4642,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.23319192 = fieldWeight in 4642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4642)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for conversion of thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
    Date
    29. 7.2011 14:44:56
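    The shape of the output such a method produces can be sketched with rdflib: a thesaurus descriptor becomes a skos:Concept, BT/NT relations become skos:broader/skos:narrower, and a non-preferred (USE FOR) term becomes skos:altLabel. The concept URIs and labels below are invented for illustration, not taken from IPSV, GTAA or MeSH:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")   # hypothetical URI scheme

      g = Graph()
      g.bind("skos", SKOS)

      g.add((EX.Mayonnaise, RDF.type, SKOS.Concept))
      g.add((EX.Mayonnaise, SKOS.prefLabel, Literal("mayonnaise", lang="en")))
      g.add((EX.Mayonnaise, SKOS.altLabel, Literal("mayo", lang="en")))   # USE FOR term
      g.add((EX.Mayonnaise, SKOS.broader, EX.Sauces))                     # BT
      g.add((EX.Sauces, SKOS.narrower, EX.Mayonnaise))                    # NT (inverse)

      print(g.serialize(format="turtle"))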
  6. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.05
    0.051940776 = product of:
      0.12985194 = sum of:
        0.11112151 = weight(_text_:conversion in 4644) [ClassicSimilarity], result of:
          0.11112151 = score(doc=4644,freq=2.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.46856573 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.018730428 = product of:
          0.037460856 = sum of:
            0.037460856 = weight(_text_:29 in 4644) [ClassicSimilarity], result of:
              0.037460856 = score(doc=4644,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.27205724 = fieldWeight in 4644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4644)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Date
    29. 7.2011 14:44:56
  7. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.05
    0.04827395 = product of:
      0.12068488 = sum of:
        0.109981775 = weight(_text_:conversion in 4639) [ClassicSimilarity], result of:
          0.109981775 = score(doc=4639,freq=6.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.4637598 = fieldWeight in 4639, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.010703102 = product of:
          0.021406204 = sum of:
            0.021406204 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.021406204 = score(doc=4639,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  8. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.04
    0.037100557 = product of:
      0.09275139 = sum of:
        0.07937251 = weight(_text_:conversion in 4705) [ClassicSimilarity], result of:
          0.07937251 = score(doc=4705,freq=2.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.3346898 = fieldWeight in 4705, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
        0.013378878 = product of:
          0.026757756 = sum of:
            0.026757756 = weight(_text_:29 in 4705) [ClassicSimilarity], result of:
              0.026757756 = score(doc=4705,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.19432661 = fieldWeight in 4705, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4705)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from e.g. the surface temperatures of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in e.g. spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow performance to be improved on "sloppy" datasets not yet targeted by existing systems.
    Date
    29. 7.2011 14:44:56
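    A toy version of the unit-based disambiguation behind the "f(Hz)" example might look as follows; the symbol and unit tables are invented stand-ins for the ontology the authors use:

      # Candidate readings of an ambiguous symbol, and units mapped to quantities.
      SYMBOL_CANDIDATES = {
          "f": {"frequency", "force", "luminous flux", "farad"},
      }
      UNIT_TO_QUANTITY = {
          "Hz": "frequency",
          "N": "force",
          "lm": "luminous flux",
      }

      def disambiguate(header: str):
          # Resolve an ambiguous symbol via its unit, e.g. "f(Hz)" -> "frequency".
          symbol, _, rest = header.partition("(")
          quantity = UNIT_TO_QUANTITY.get(rest.rstrip(")").strip())
          if quantity in SYMBOL_CANDIDATES.get(symbol.strip(), set()):
              return quantity
          return None

      print(disambiguate("f(Hz)"))   # -> frequency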
  9. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.026074996 = product of:
      0.06518749 = sum of:
        0.05180861 = product of:
          0.15542583 = sum of:
            0.15542583 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.15542583 = score(doc=5669,freq=2.0), product of:
                0.33185944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039143547 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.013378878 = product of:
          0.026757756 = sum of:
            0.026757756 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.026757756 = score(doc=5669,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  10. Scott, M.L.: Conversion tables : LC-Dewey / Dewey-LC (1993) 0.03
    0.025399202 = product of:
      0.12699601 = sum of:
        0.12699601 = weight(_text_:conversion in 1216) [ClassicSimilarity], result of:
          0.12699601 = score(doc=1216,freq=2.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.5355037 = fieldWeight in 1216, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.0625 = fieldNorm(doc=1216)
      0.2 = coord(1/5)
    
  11. Blanchi, C.; Petrone, J.: Distributed interoperable metadata registry (2001) 0.02
    0.022224303 = product of:
      0.11112151 = sum of:
        0.11112151 = weight(_text_:conversion in 1228) [ClassicSimilarity], result of:
          0.11112151 = score(doc=1228,freq=2.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.46856573 = fieldWeight in 1228, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1228)
      0.2 = coord(1/5)
    
    Abstract
    Interoperability between digital libraries depends on effective sharing of metadata. Successful sharing of metadata requires common standards for metadata exchange. Previous efforts have focused on either defining a single metadata standard, such as Dublin Core, or building digital library middleware, such as Z39.50 or Stanford's Digital Library Interoperability Protocol. In this article, we propose a distributed architecture for managing metadata and metadata schema. Instead of normalizing all metadata and schema to a single format, we have focused on building a middleware framework that tolerates heterogeneity. By providing facilities for typing and dynamic conversion of metadata, our system permits continual introduction of new forms of metadata with minimal impact on compatibility.
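    In the spirit of the typed, dynamically convertible metadata the abstract describes, a converter registry can be sketched in a few lines of Python; the type names, field tags and API are illustrative assumptions, not the system's actual interface:

      from typing import Callable, Dict, Tuple

      converters: Dict[Tuple[str, str], Callable[[dict], dict]] = {}

      def register(src: str, dst: str):
          # Decorator that registers a converter between two metadata types.
          def wrap(fn: Callable[[dict], dict]):
              converters[(src, dst)] = fn
              return fn
          return wrap

      @register("marc", "dublin-core")
      def marc_to_dc(record: dict) -> dict:
          # Illustrative tag choice: 245$a = title, 100$a = main author.
          return {"title": record.get("245a"), "creator": record.get("100a")}

      def convert(record: dict, src: str, dst: str) -> dict:
          # New metadata types only require registering new converters.
          try:
              return converters[(src, dst)](record)
          except KeyError:
              raise ValueError(f"no converter registered for {src} -> {dst}")

      print(convert({"245a": "Example title", "100a": "Doe, J."}, "marc", "dublin-core"))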
  12. Hasund Langballe, A.M.; Bell, B.: National bibliographies and the International Conference on National Bibliographic Services Recommendations : Europe; North, Central and South America; and Oceania (2001) 0.02
    0.019254657 = product of:
      0.09627328 = sum of:
        0.09627328 = product of:
          0.19254656 = sum of:
            0.19254656 = weight(_text_:europe in 6901) [ClassicSimilarity], result of:
              0.19254656 = score(doc=6901,freq=2.0), product of:
                0.23842667 = queryWeight, product of:
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.039143547 = queryNorm
                0.8075714 = fieldWeight in 6901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6901)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  13. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.02
    0.019246811 = product of:
      0.09623405 = sum of:
        0.09623405 = weight(_text_:conversion in 2362) [ClassicSimilarity], result of:
          0.09623405 = score(doc=2362,freq=6.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.40578982 = fieldWeight in 2362, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
      0.2 = coord(1/5)
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links but a very poor description logic structure enabling inferences. And what does the following, said a few lines previously, really mean: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of the concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, are they exemplary, and can they be recommended as "the best way"? I raise strong doubts about that, and I miss a more profound discussion of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style, more or less addressing students; but since I am myself a learner, especially in the field of medical knowledge representation, I do not speak "ex cathedra".
  14. O'Neill, E.T.: ¬The FRBRization of Humphry Clinker : a case study in the application of IFLA's Functional Requirements for Bibliographic Records (FRBR) (2002) 0.02
    0.019049404 = product of:
      0.095247015 = sum of:
        0.095247015 = weight(_text_:conversion in 2433) [ClassicSimilarity], result of:
          0.095247015 = score(doc=2433,freq=2.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.40162778 = fieldWeight in 2433, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.046875 = fieldNorm(doc=2433)
      0.2 = coord(1/5)
    
    Abstract
    The goal of OCLC's FRBR projects is to examine issues associated with the conversion of a set of bibliographic records to conform to FRBR requirements (a process referred to as "FRBRization"). The goals of this FRBR project were to:
    - examine issues associated with creating an entity-relationship model for (i.e., "FRBRizing") a non-trivial work
    - better understand the relationship between the bibliographic records and the bibliographic objects they represent
    - determine if the information available in the bibliographic record is sufficient to reliably identify the FRBR entities
    - develop a data set that could be used to evaluate FRBRization algorithms.
    Using an exemplary work as a case study, lead scientist Ed O'Neill sought to better understand the relationship between bibliographic records and the bibliographic objects they represent, and to determine if the information available in the bibliographic records is sufficient to reliably identify FRBR entities.
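    To make the entity-relationship target concrete, here is a minimal sketch of FRBR's Group 1 entities (work, expression, manifestation, item) as Python dataclasses; the attribute names and the imprint details are illustrative, not O'Neill's actual model or data:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Item:                       # a single physical or digital copy
          call_number: str

      @dataclass
      class Manifestation:              # a publication embodying an expression
          publisher: str
          year: int
          items: List[Item] = field(default_factory=list)

      @dataclass
      class Expression:                 # a realization of the work (text, translation, ...)
          language: str
          form: str
          manifestations: List[Manifestation] = field(default_factory=list)

      @dataclass
      class Work:                       # the abstract intellectual creation
          title: str
          creator: str
          expressions: List[Expression] = field(default_factory=list)

      work = Work("The Expedition of Humphry Clinker", "Smollett, Tobias")
      first_edition = Manifestation(publisher="W. Johnston and B. Collins", year=1771)
      work.expressions.append(Expression(language="en", form="text",
                                         manifestations=[first_edition]))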
  15. Klic, L.; Miller, M.; Nelson, J.K.; Pattuelli, C.; Provo, A.: ¬The drawings of the Florentine painters : from print catalog to linked open data (2017) 0.02
    0.019049404 = product of:
      0.095247015 = sum of:
        0.095247015 = weight(_text_:conversion in 4105) [ClassicSimilarity], result of:
          0.095247015 = score(doc=4105,freq=2.0), product of:
            0.23715246 = queryWeight, product of:
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.039143547 = queryNorm
            0.40162778 = fieldWeight in 4105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0585327 = idf(docFreq=280, maxDocs=44218)
              0.046875 = fieldNorm(doc=4105)
      0.2 = coord(1/5)
    
    Abstract
    The Drawings of The Florentine Painters project created the first online database of Florentine Renaissance drawings by applying Linked Open Data (LOD) techniques to a foundational text of the same name, first published by Bernard Berenson in 1903 (revised and expanded editions, 1938 and 1961). The goal was to make Berenson's catalog information, still an essential information resource today, available in a machine-readable format, allowing researchers to access the source content through open data services. This paper provides a technical overview of the methods and processes applied in the conversion of Berenson's catalog to LOD using the CIDOC-CRM ontology; it also discusses the different phases of the project, focusing on the challenges and issues of data transformation and publishing. The project was funded by the Samuel H. Kress Foundation and organized by Villa I Tatti, The Harvard University Center for Italian Renaissance Studies. Catalog: http://florentinedrawings.itatti.harvard.edu. Data Endpoint: http://data.itatti.harvard.edu.
  16. Danskin, A.; Gryspeerdt, K.: Changing the Rules? : RDA and cataloguing in Europe. (2014) 0.02
    0.018153464 = product of:
      0.09076732 = sum of:
        0.09076732 = product of:
          0.18153463 = sum of:
            0.18153463 = weight(_text_:europe in 5137) [ClassicSimilarity], result of:
              0.18153463 = score(doc=5137,freq=4.0), product of:
                0.23842667 = queryWeight, product of:
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.039143547 = queryNorm
                0.7613856 = fieldWeight in 5137, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5137)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper provides an overview of plans to implement RDA: Resource Description & Access in Europe to replace existing cataloguing rules. It is based on survey information gathered by EURIG and CILIP CIG. It includes background on the development of RDA as a replacement for AACR2.
  17. Woldering, B.: Connecting with users : Europe and multilinguality (2006) 0.02
    0.018153464 = product of:
      0.09076732 = sum of:
        0.09076732 = product of:
          0.18153463 = sum of:
            0.18153463 = weight(_text_:europe in 5032) [ClassicSimilarity], result of:
              0.18153463 = score(doc=5032,freq=4.0), product of:
                0.23842667 = queryWeight, product of:
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.039143547 = queryNorm
                0.7613856 = fieldWeight in 5032, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5032)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper introduces the new Internet service The European Library, provided by the Conference of European National Librarians (CENL), and gives an overview of activities in Europe for multilingual library services, developed and tested in various projects: TEL-ME-MOR, MACS (Multilingual Access to Subjects), MSAC (Multilingual Subject Access to Catalogues of National Libraries), Crisscross, and VIAF (Virtual International Authority File).
  18. Franke, F.: ¬Das Framework for Information Literacy : neue Impulse für die Förderung von Informationskompetenz in Deutschland?! (2017) 0.02
    0.017487083 = product of:
      0.043717705 = sum of:
        0.027807476 = product of:
          0.055614952 = sum of:
            0.055614952 = weight(_text_:29 in 2248) [ClassicSimilarity], result of:
              0.055614952 = score(doc=2248,freq=6.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.40390027 = fieldWeight in 2248, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2248)
          0.5 = coord(1/2)
        0.01591023 = product of:
          0.03182046 = sum of:
            0.03182046 = weight(_text_:22 in 2248) [ClassicSimilarity], result of:
              0.03182046 = score(doc=2248,freq=2.0), product of:
                0.13707404 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039143547 = queryNorm
                0.23214069 = fieldWeight in 2248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2248)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    https://www.o-bib.de/article/view/2017H4S22-29. DOI: https://doi.org/10.5282/o-bib/2017H4S22-29.
    Source
    o-bib: Das offene Bibliotheksjournal. 4(2017) Nr.4, S.22-29
  19. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.02
    0.017079165 = product of:
      0.08539583 = sum of:
        0.08539583 = sum of:
          0.064182185 = weight(_text_:europe in 3608) [ClassicSimilarity], result of:
            0.064182185 = score(doc=3608,freq=2.0), product of:
              0.23842667 = queryWeight, product of:
                6.091085 = idf(docFreq=271, maxDocs=44218)
                0.039143547 = queryNorm
              0.26919046 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.091085 = idf(docFreq=271, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.021213641 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.021213641 = score(doc=3608,freq=2.0), product of:
              0.13707404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039143547 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.2 = coord(1/5)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable, as alive in the digital world, as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  20. Hartmann, S.; Haffner, A.: Linked-RDA-Data in der Praxis (2010) 0.02
    0.01704794 = product of:
      0.042619847 = sum of:
        0.021406204 = product of:
          0.042812407 = sum of:
            0.042812407 = weight(_text_:29 in 1679) [ClassicSimilarity], result of:
              0.042812407 = score(doc=1679,freq=2.0), product of:
                0.13769476 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.039143547 = queryNorm
                0.31092256 = fieldWeight in 1679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1679)
          0.5 = coord(1/2)
        0.021213641 = product of:
          0.042427283 = sum of:
            0.042427283 = weight(_text_:22 in 1679) [ClassicSimilarity], result of:
              0.042427283 = score(doc=1679,freq=2.0), product of:
                0.13707404 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039143547 = queryNorm
                0.30952093 = fieldWeight in 1679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1679)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Lecture given at SWIB 2010, 29/30 November 2010, Cologne.
    Date
    13. 2.2011 20:22:23

Languages

  • e 166
  • d 157
  • el 2
  • a 1
  • nl 1

Types

  • a 155
  • i 20
  • m 7
  • b 4
  • p 3
  • r 3
  • s 3
  • x 2
  • n 1