Search (217 results, page 1 of 11)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.13
    0.1257631 = product of:
      0.5030524 = sum of:
        0.5030524 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.5030524 = score(doc=1826,freq=2.0), product of:
            0.53704935 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06334615 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.25 = coord(1/4)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
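
The indented blocks under each result are Lucene ClassicSimilarity (TF-IDF) explain trees: each matched term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the document score is the sum of term weights times the coordination factor. A minimal sketch, using only the numbers shown for result 1 above (plain arithmetic, no Lucene assumed):

```python
import math

# Values taken directly from the explain tree of result 1 (doc 1826, term "3a").
idf = 8.478011           # idf(docFreq=24, maxDocs=44218)
query_norm = 0.06334615  # queryNorm
freq = 2.0               # termFreq
field_norm = 0.078125    # fieldNorm(doc=1826)
coord = 1 / 4            # coord(1/4): 1 of 4 query terms matched

# ClassicSimilarity's idf would itself be 1 + ln(maxDocs / (docFreq + 1))
#   = 1 + ln(44218 / 25) ~ 8.478, consistent with the value shown.
tf = math.sqrt(freq)                        # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm             # 0.53704935 = queryWeight
field_weight = tf * idf * field_norm        # 0.93669677 = fieldWeight
term_weight = query_weight * field_weight   # 0.5030524  = weight(_text_:3a)
score = coord * term_weight                 # 0.1257631  = final document score

print(f"{score:.7f}")  # ~0.1257631, matching the ranking above
```

The multi-term entries below follow the same pattern; for result 2, coord(2/4) = 0.5 multiplies the sum of the two term weights: 0.5 × (0.15817337 + 0.06866023) = 0.1134168.
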
  2. Understanding metadata (2004) 0.11
    0.1134168 = product of:
      0.2268336 = sum of:
        0.15817337 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
          0.15817337 = score(doc=2686,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.46979034 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.06866023 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
          0.06866023 = score(doc=2686,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.30952093 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
      0.5 = coord(2/4)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.10
    0.10061048 = product of:
      0.40244192 = sum of:
        0.40244192 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.40244192 = score(doc=230,freq=2.0), product of:
            0.53704935 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06334615 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.25 = coord(1/4)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  4. Priss, U.: Faceted knowledge representation (1999) 0.10
    0.09923969 = product of:
      0.19847938 = sum of:
        0.13840169 = weight(_text_:objects in 2654) [ClassicSimilarity], result of:
          0.13840169 = score(doc=2654,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.41106653 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.0600777 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
          0.0600777 = score(doc=2654,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.2708308 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
      0.5 = coord(2/4)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
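
The abstract of entry 4 names units, binary relations (0/1 matrices) and facets that combine the two. A minimal sketch of those notions, with invented units and an invented relation (this is an illustration, not Priss's own notation):

```python
# Atomic elements of the knowledge system.
units = ["indexing", "retrieval", "thesaurus"]

# A relation is a binary matrix over the units: supports[i][j] == 1 means
# unit i supports unit j under this (hypothetical) relation.
supports = [
    [0, 1, 0],   # indexing supports retrieval
    [0, 0, 0],
    [1, 0, 0],   # thesaurus supports indexing
]

# A facet bundles units with a relation, representing one viewpoint
# of the knowledge system.
facet_process = {"name": "process view", "units": units, "relation": supports}

def related(facet, a, b):
    """Return True if unit a relates to unit b within the given facet."""
    i, j = facet["units"].index(a), facet["units"].index(b)
    return facet["relation"][i][j] == 1

print(related(facet_process, "indexing", "retrieval"))  # True
```
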
  5. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.06
    0.06288155 = product of:
      0.2515262 = sum of:
        0.2515262 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
          0.2515262 = score(doc=4388,freq=2.0), product of:
            0.53704935 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06334615 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.25 = coord(1/4)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.06
    0.06288155 = product of:
      0.2515262 = sum of:
        0.2515262 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
          0.2515262 = score(doc=5669,freq=2.0), product of:
            0.53704935 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06334615 = queryNorm
            0.46834838 = fieldWeight in 5669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.25 = coord(1/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  7. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.06
    0.05931501 = product of:
      0.23726004 = sum of:
        0.23726004 = weight(_text_:objects in 469) [ClassicSimilarity], result of:
          0.23726004 = score(doc=469,freq=8.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.7046855 = fieldWeight in 469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
      0.25 = coord(1/4)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
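
Entry 7 argues for preserving whole processes and assessing their authenticity upon re-execution. A minimal sketch of one way such a check could look (recording a digest of the outputs at preservation time and comparing it after re-running the process); this illustrates the idea only and is not the paper's actual approach:

```python
import hashlib
import json

def run_process(params):
    # Stand-in for an arbitrary scientific process; deterministic in its inputs.
    return {"mean": sum(params["values"]) / len(params["values"])}

def digest(result):
    # Canonical JSON serialisation hashed with SHA-256.
    return hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()

# At preservation time: capture the inputs and the digest of the outputs.
preserved = {"params": {"values": [1, 2, 3, 4]}, "output_digest": None}
preserved["output_digest"] = digest(run_process(preserved["params"]))

# Later, upon re-execution: the digests must match for the run to be
# considered authentic.
assert digest(run_process(preserved["params"])) == preserved["output_digest"]
print("re-execution matches the preserved digest")
```
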
  8. Faceted classification of information (n.d.) 0.05
    0.04942918 = product of:
      0.19771671 = sum of:
        0.19771671 = weight(_text_:objects in 2653) [ClassicSimilarity], result of:
          0.19771671 = score(doc=2653,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.58723795 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=2653)
      0.25 = coord(1/4)
    
    Abstract
    An explanation of faceted classification meant for people working in knowledge management. An example given for a high-technology company has the fundamental categories Products, Applications, Organizations, People, Domain objects ("technologies applied in the marketplace in which the organization participates"), Events (i.e. time), and Publications.
  9. Koster, L.: Persistent identifiers for heritage objects (2020) 0.05
    0.04942918 = product of:
      0.19771671 = sum of:
        0.19771671 = weight(_text_:objects in 5718) [ClassicSimilarity], result of:
          0.19771671 = score(doc=5718,freq=8.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.58723795 = fieldWeight in 5718, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5718)
      0.25 = coord(1/4)
    
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs should also be used for linked data. The second part examines current infrastructural practices, and existing PID systems and their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented, which is used to address a number of practical considerations. This section concludes with a number of recommendations.
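
The core mechanism behind the persistent identifiers discussed in entry 9 is indirection: the identifier stays fixed while the location it resolves to can be updated. A minimal sketch under that assumption (the PID scheme and URLs below are invented, not any of the real PID systems the article characterizes):

```python
# A PID registry maps stable identifiers to the *current* location of an
# object; the identifier never changes, only the mapping does.
registry = {
    "example-pid:heritage/12345": "https://old-repository.example.org/obj/12345",
}

def resolve(pid: str) -> str:
    """Return the current location registered for a persistent identifier."""
    return registry[pid]

# The institution migrates its repository: only the registry entry changes,
# and every published reference to the PID keeps working.
registry["example-pid:heritage/12345"] = "https://new-platform.example.org/items/12345"
print(resolve("example-pid:heritage/12345"))
```
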
  10. Arms, W.Y.; Blanchi, C.; Overly, E.A.: An architecture for information in digital libraries (1997) 0.05
    0.04893239 = product of:
      0.19572955 = sum of:
        0.19572955 = weight(_text_:objects in 1260) [ClassicSimilarity], result of:
          0.19572955 = score(doc=1260,freq=16.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.5813359 = fieldWeight in 1260, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1260)
      0.25 = coord(1/4)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
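
Entry 10 names three building blocks: digital objects, handles that identify them, and repositories that store them. A minimal sketch of that data model; the class names, fields and the example handle are illustrative, not the NDLP or Kahn/Wilensky implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    """A digital object: identified by a handle, carrying content and metadata."""
    handle: str          # globally unique identifier, e.g. "example/coolidge-001"
    metadata: dict       # descriptive metadata
    content: bytes = b"" # the digital material itself

@dataclass
class Repository:
    """A repository stores digital objects and retrieves them by handle."""
    objects: dict = field(default_factory=dict)

    def deposit(self, obj: DigitalObject) -> None:
        self.objects[obj.handle] = obj

    def retrieve(self, handle: str) -> DigitalObject:
        return self.objects[handle]

repo = Repository()
repo.deposit(DigitalObject("example/coolidge-001", {"title": "Coolidge Consumerism item"}))
print(repo.retrieve("example/coolidge-001").metadata["title"])
```
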
  11. Payette, S.; Blanchi, C.; Lagoze, C.; Overly, E.A.: Interoperability for digital objects and repositories : the Cornell/CNRI experiments (1999) 0.05
    0.048430502 = product of:
      0.19372201 = sum of:
        0.19372201 = weight(_text_:objects in 1248) [ClassicSimilarity], result of:
          0.19372201 = score(doc=1248,freq=12.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.5753733 = fieldWeight in 1248, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=1248)
      0.25 = coord(1/4)
    
    Abstract
    For several years the Digital Library Research Group at Cornell University and the Corporation for National Research Initiatives (CNRI) have been engaged in research focused on the design and development of infrastructures for open architecture, confederated digital libraries. The goal of this effort is to achieve interoperability and extensibility of digital library systems through the definition of key digital library services and their open interfaces, allowing flexible interaction of existing services and augmentation of the infrastructure with new services. Some aspects of this research have included the development and deployment of the Dienst software, the Handle System®, and the architecture of digital objects and repositories. In this paper, we describe the joint effort by Cornell and CNRI to prototype a rich and deployable architecture for interoperable digital objects and repositories. This effort has challenged us to move theories of interoperability closer to practice. The Cornell/CNRI collaboration builds on two existing projects focusing on the development of interoperable digital libraries. Details relating to the technology of these projects are described elsewhere. Both projects were strongly influenced by the fundamental abstractions of repositories and digital objects as articulated by Kahn and Wilensky in A Framework for Distributed Digital Object Services. Furthermore, both programs were influenced by the container architecture described in the Warwick Framework, and by the notions of distributed dynamic objects presented by Lagoze and Daniel in their Distributed Active Relationship work. With these common roots, one would expect that the CNRI and Cornell repositories would be at least theoretically interoperable. However, the actual test would be the extent to which our independently developed repositories were practically interoperable. This paper focuses on the definition of interoperability in the joint Cornell/CNRI work and the set of experiments conducted to formally test it. Our motivation for this work is the eventual deployment of formally tested reference implementations of the repository architecture for experimentation and development by fellow digital library researchers. In Section 2, we summarize the digital object and repository approach that was the focus of our interoperability experiments. In Section 3, we describe the set of experiments that progressively tested interoperability at increasing levels of functionality. In Section 4, we discuss general conclusions, and in Section 5, we give a preview of our future work, including our plans to evolve our experimentation to the point of defining a set of formal metrics for measuring interoperability for repositories and digital objects. This is still a work in progress that is expected to undergo additional refinements during its development.
  12. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.04
    0.042806923 = product of:
      0.1712277 = sum of:
        0.1712277 = weight(_text_:objects in 3372) [ClassicSimilarity], result of:
          0.1712277 = score(doc=3372,freq=6.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.508563 = fieldWeight in 3372, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3372)
      0.25 = coord(1/4)
    
    Abstract
    Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org enables HTML-embedded metadata to be consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains) backed by Google, Bing, Yahoo!, and Yandex, for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.
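
A minimal sketch of the kind of markup entry 12 is about: building a Schema.org description of a cultural heritage object as JSON-LD for embedding in an HTML page. The record values are invented and the mapping is deliberately simplified; the paper's actual EDM-to-Schema.org recommendations are far richer.

```python
import json

# Hypothetical, simplified record for one cultural heritage object.
edm_record = {
    "title": "Portrait of a woman",
    "creator": "Unknown painter",
    "provider": "Example Museum",
    "landing_page": "https://www.europeana.eu/item/000/example",
}

# Map it onto Schema.org terms as JSON-LD, suitable for embedding in the
# item's HTML page inside a <script type="application/ld+json"> element.
schema_org = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": edm_record["title"],
    "creator": {"@type": "Person", "name": edm_record["creator"]},
    "provider": {"@type": "Organization", "name": edm_record["provider"]},
    "url": edm_record["landing_page"],
}

print(json.dumps(schema_org, indent=2))
```
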
  13. METS: an overview & tutorial : Metadata Encoding & Transmission Standard (METS) (2001) 0.04
    0.04194205 = product of:
      0.1677682 = sum of:
        0.1677682 = weight(_text_:objects in 1323) [ClassicSimilarity], result of:
          0.1677682 = score(doc=1323,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.49828792 = fieldWeight in 1323, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=1323)
      0.25 = coord(1/4)
    
    Abstract
    Maintaining a library of digital objects necessarily requires maintaining metadata about those objects. The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. While a library may record descriptive metadata regarding a book in its collection, the book will not dissolve into a series of unconnected pages if the library fails to record structural metadata regarding the book's organization, nor will scholars be unable to evaluate the book's worth if the library fails to note that the book was produced using a Ryobi offset press. The same cannot be said for a digital version of the same book. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides. For internal management purposes, a library must have access to appropriate technical metadata in order to periodically refresh and migrate the data, ensuring the durability of valuable resources.
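
The point entry 13 makes about structural metadata can be made concrete with a small sketch: without an explicit reading order, a set of page-image files does not reconstruct the book. The layout below is a simplified, invented stand-in, not actual METS XML.

```python
# Simplified structural metadata for a digitised book: descriptive metadata
# plus an ordered structure map tying logical divisions to page-image files.
digital_book = {
    "descriptive": {"title": "Example digitised monograph"},
    "files": {"img001": "page-001.tif", "img002": "page-002.tif", "img003": "page-003.tif"},
    "struct_map": [                      # reading order: the structural metadata
        {"label": "Title page", "file": "img001"},
        {"label": "Chapter 1, p. 1", "file": "img002"},
        {"label": "Chapter 1, p. 2", "file": "img003"},
    ],
}

# Reassembling the book relies entirely on the structure map's order.
for division in digital_book["struct_map"]:
    print(division["label"], "->", digital_book["files"][division["file"]])
```
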
  14. O'Neill, E.T.: The FRBRization of Humphry Clinker : a case study in the application of IFLA's Functional Requirements for Bibliographic Records (FRBR) (2002) 0.04
    0.04194205 = product of:
      0.1677682 = sum of:
        0.1677682 = weight(_text_:objects in 2433) [ClassicSimilarity], result of:
          0.1677682 = score(doc=2433,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.49828792 = fieldWeight in 2433, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2433)
      0.25 = coord(1/4)
    
    Abstract
    The goal of OCLC's FRBR projects is to examine issues associated with the conversion of a set of bibliographic records to conform to FRBR requirements (a process referred to as "FRBRization"). The goals of this FRBR project were to:
    - examine issues associated with creating an entity-relationship model for (i.e., "FRBRizing") a non-trivial work
    - better understand the relationship between the bibliographic records and the bibliographic objects they represent
    - determine if the information available in the bibliographic record is sufficient to reliably identify the FRBR entities
    - develop a data set that could be used to evaluate FRBRization algorithms.
    Using an exemplary work as a case study, lead scientist Ed O'Neill sought to:
    - better understand the relationship between bibliographic records and the bibliographic objects they represent
    - determine if the information available in the bibliographic records is sufficient to reliably identify FRBR entities.
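
FRBRization, as described in entry 14, recovers the FRBR Group 1 entities (work, expression, manifestation, item) from flat bibliographic records. A minimal sketch of that entity model; the Humphry Clinker data shown is invented for illustration, not O'Neill's data set:

```python
from dataclasses import dataclass, field
from typing import List

# FRBR Group 1: a work is realised through expressions, embodied in
# manifestations, and exemplified by items.

@dataclass
class Item:
    holding: str                   # a specific copy in a specific collection

@dataclass
class Manifestation:
    publisher: str
    year: int
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:
    language: str
    form: str                      # e.g. "text"
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    title: str
    creator: str
    expressions: List[Expression] = field(default_factory=list)

# Illustrative example: one work, one expression, two manifestations.
work = Work("The Expedition of Humphry Clinker", "Smollett, Tobias",
            [Expression("English", "text",
                        [Manifestation("Example Press", 1966, [Item("Copy 1, Main Library")]),
                         Manifestation("Another Publisher", 1984, [])])])
print(len(work.expressions[0].manifestations))  # 2
```
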
  15. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.04
    0.04194205 = product of:
      0.1677682 = sum of:
        0.1677682 = weight(_text_:objects in 3884) [ClassicSimilarity], result of:
          0.1677682 = score(doc=3884,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.49828792 = fieldWeight in 3884, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
      0.25 = coord(1/4)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
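
For readers who want to try the baseline technique that entry 15 extends: standard t-SNE is available in scikit-learn (assumed installed here); the multiple-maps extension described in the abstract is not part of scikit-learn, so this sketch produces only a single metric map, with exactly the limitation the paper addresses.

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy high-dimensional data standing in for, e.g., word co-occurrence vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))

# Standard t-SNE: one low-dimensional metric map.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)  # (100, 2)
```
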
  16. Dhillon, P.; Singh, M.: An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.04
    0.04194205 = product of:
      0.1677682 = sum of:
        0.1677682 = weight(_text_:objects in 981) [ClassicSimilarity], result of:
          0.1677682 = score(doc=981,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.49828792 = fieldWeight in 981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=981)
      0.25 = coord(1/4)
    
    Abstract
    In the booming area of Internet technology, the concept of the Internet of Things (IoT) holds a distinct position, interconnecting a large number of smart objects. In the context of the social IoT (SIoT), the presented work evaluates trust and reliability. The proposed framework is divided into two blocks, namely the Verification Block (VB) and the Evaluation Block (EB). VB defines various ontology-based relationships computed for the objects that reflect the security and trustworthiness of an accessed service, while EB is used for the feedback analysis and proves to be a valuable step that computes and governs the success rate of the service. A support vector machine (SVM) is applied to categorise the trust-based evaluation. The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
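
A minimal sketch of the SVM step mentioned at the end of entry 16: classifying interactions as trustworthy or not from numeric trust features. The feature names and data are invented, and scikit-learn is assumed available; the paper's actual feature set and evaluation are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Invented feature vectors: [relationship score, feedback score, past success
# rate] for observed SIoT interactions, labelled 1 = trustworthy.
X = np.array([[0.9, 0.8, 0.95],
              [0.2, 0.1, 0.30],
              [0.8, 0.7, 0.90],
              [0.1, 0.3, 0.20]])
y = np.array([1, 0, 1, 0])

clf = SVC(kernel="rbf").fit(X, y)          # the trust-categorisation step
print(clf.predict([[0.85, 0.75, 0.9]]))    # -> [1], predicted trustworthy
```
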
  17. Hodson, H.: Google's fact-checking bots build vast knowledge bank (2014) 0.04
    0.03954334 = product of:
      0.15817337 = sum of:
        0.15817337 = weight(_text_:objects in 1700) [ClassicSimilarity], result of:
          0.15817337 = score(doc=1700,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.46979034 = fieldWeight in 1700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=1700)
      0.25 = coord(1/4)
    
    Abstract
    The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world's facts. Google is building the largest store of knowledge in human history - and it's doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
  18. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.04
    0.039233197 = product of:
      0.15693279 = sum of:
        0.15693279 = weight(_text_:objects in 1195) [ClassicSimilarity], result of:
          0.15693279 = score(doc=1195,freq=14.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.4661057 = fieldWeight in 1195, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
      0.25 = coord(1/4)
    
    Abstract
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29-30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics:
    * Overlapping problems
      o Collection and preservation of digital objects (selection criteria, preservation policy)
      o Definition of criteria for trusted repositories
      o Creation of models of cooperation, etc.
    * Digital objects production process
      o Analysis of potential conflicts between production and long-term preservation
      o Documentation of existing document models and recommendations for standard models to be used for long-term preservation
      o Identification systems for digital objects, etc.
    * Transfer of digital objects
      o Object data and metadata
      o Transfer protocols and interoperability
      o Handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects
      o Design and prototype implementation of depot systems for digital objects (OAIS was chosen to be the best functional model.)
      o Authenticity
      o Functional requirements on user interfaces of a depot system
      o Identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
  19. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.04
    0.038684454 = product of:
      0.15473782 = sum of:
        0.15473782 = weight(_text_:objects in 553) [ClassicSimilarity], result of:
          0.15473782 = score(doc=553,freq=10.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.4595864 = fieldWeight in 553, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.25 = coord(1/4)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus have access to other collections, using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed in order to provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary used in the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
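
The mechanism described in the middle of entry 19 (if concept C in vocabulary V is mapped to an equivalent concept D in vocabulary W, a query for C can also return objects indexed with D) in a minimal sketch; the vocabularies, the mapping and the object labels are invented:

```python
# Hypothetical mapping between two description vocabularies, e.g. produced by
# an alignment tool: concept in vocabulary V -> equivalent concept in W.
mappings = {"V:illumination": "W:miniature"}

# Objects indexed against either vocabulary.
index = {
    "V:illumination": ["manuscript-A"],
    "W:miniature": ["manuscript-B", "manuscript-C"],
}

def search(concept: str) -> list:
    """Return objects indexed with the concept or with any aligned concept."""
    results = list(index.get(concept, []))
    aligned = mappings.get(concept)
    if aligned:
        results += index.get(aligned, [])
    return results

print(search("V:illumination"))  # ['manuscript-A', 'manuscript-B', 'manuscript-C']
```
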
  20. Dietze, S.; Maynard, D.; Demidova, E.; Risse, T.; Stavrakas, Y.: Entity extraction and consolidation for social Web content preservation (2012) 0.03
    0.034951705 = product of:
      0.13980682 = sum of:
        0.13980682 = weight(_text_:objects in 470) [ClassicSimilarity], result of:
          0.13980682 = score(doc=470,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.41523993 = fieldWeight in 470, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
      0.25 = coord(1/4)
    
    Abstract
    With the rapidly increasing pace at which Web content is evolving, particularly social media, preserving the Web and its evolution over time becomes an important challenge. Meaningful analysis of Web content lends itself to an entity-centric view that organises Web resources according to the information objects related to them. Therefore, the crucial challenge is to extract, detect and correlate entities from a vast number of heterogeneous Web resources where the nature and quality of the content may vary heavily. While a wealth of information extraction tools aid this process, we believe that the consolidation of automatically extracted data has to be treated as an equally important step in order to ensure high quality and non-ambiguity of generated data. In this paper we present an approach which is based on an iterative cycle exploiting Web data for (1) targeted archiving/crawling of Web objects, (2) entity extraction and detection, and (3) entity correlation. The long-term goal is to preserve Web content over time and allow its navigation and analysis based on well-formed structured RDF data about entities.
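
A minimal sketch of the consolidation step emphasised in entry 20: merging surface variants of the same entity extracted from different Web resources into one canonical entity. The normalisation rule and the data are deliberately simplistic and invented; the paper's approach is considerably more involved.

```python
from collections import defaultdict

# Entity mentions as they might come out of an extraction tool, paired with
# the resource they were found in.
mentions = [
    ("Barack Obama", "post-1"),
    ("barack obama", "tweet-7"),
    ("Obama, Barack", "article-3"),
]

def normalise(name: str) -> str:
    """Crude canonicalisation: lower-case and reorder 'Last, First' forms."""
    name = name.strip().lower()
    if "," in name:
        last, first = [p.strip() for p in name.split(",", 1)]
        name = f"{first} {last}"
    return name

consolidated = defaultdict(list)
for surface, resource in mentions:
    consolidated[normalise(surface)].append(resource)

print(dict(consolidated))  # {'barack obama': ['post-1', 'tweet-7', 'article-3']}
```
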

Languages

  • e 124
  • d 86
  • el 2
  • a 1
  • f 1
  • nl 1

Types

  • a 111
  • i 10
  • m 5
  • s 5
  • r 3
  • b 2
  • n 1
  • x 1