Search (390 results, page 1 of 20)

  • Active filter: type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.21
    0.21031275 = product of:
      0.52578187 = sum of:
        0.43241197 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.43241197 = score(doc=230,freq=2.0), product of:
            0.5770437 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06806357 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.09336991 = weight(_text_:7 in 230) [ClassicSimilarity], result of:
          0.09336991 = score(doc=230,freq=4.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.41409606 = fieldWeight in 230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.4 = coord(2/5)
    
    Date
    7. 5.2021 15:58:07
    Source
     https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
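
The score breakdown shown under each hit is Lucene's ClassicSimilarity explain output: every matching term contributes queryWeight (idf × queryNorm) times fieldWeight (sqrt(termFreq) × idf × fieldNorm), the contributions are summed, and the sum is multiplied by the coordination factor coord (matched query terms / total query terms). A minimal Python sketch that reproduces the score of the Popper record from the numbers above (the helper name term_score is our own, not part of the search system):

    from math import sqrt

    def term_score(freq, idf, query_norm, field_norm):
        """ClassicSimilarity contribution of one query term in one document."""
        query_weight = idf * query_norm               # e.g. 8.478011 * 0.06806357 = 0.5770437
        field_weight = sqrt(freq) * idf * field_norm  # tf(freq) * idf * fieldNorm
        return query_weight * field_weight

    # Values copied from the explain output of hit 1 (doc 230).
    query_norm = 0.06806357
    w_3a = term_score(freq=2.0, idf=8.478011, query_norm=query_norm, field_norm=0.0625)
    w_7 = term_score(freq=4.0, idf=3.3127685, query_norm=query_norm, field_norm=0.0625)

    coord = 2 / 5                                     # 2 of 5 query terms matched
    score = coord * (w_3a + w_7)
    print(round(w_3a, 8), round(w_7, 8), round(score, 8))
    # approximately 0.43241197, 0.09336991, 0.21031275
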
  2. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.12
    0.12460863 = product of:
      0.31152156 = sum of:
        0.2702575 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
          0.2702575 = score(doc=4388,freq=2.0), product of:
            0.5770437 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06806357 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.041264053 = weight(_text_:7 in 4388) [ClassicSimilarity], result of:
          0.041264053 = score(doc=4388,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.18300632 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.4 = coord(2/5)
    
    Date
    7. 8.2018 12:05:42
    Footnote
     See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  3. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.12
    0.121778324 = product of:
      0.3044458 = sum of:
        0.25492895 = weight(_text_:objects in 469) [ClassicSimilarity], result of:
          0.25492895 = score(doc=469,freq=8.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.7046855 = fieldWeight in 469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
        0.049516868 = weight(_text_:7 in 469) [ClassicSimilarity], result of:
          0.049516868 = score(doc=469,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.21960759 = fieldWeight in 469, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
      0.4 = coord(2/5)
    
    Abstract
     Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
    Pages
    S.7-17
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.11
    0.108103 = product of:
      0.540515 = sum of:
        0.540515 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.540515 = score(doc=1826,freq=2.0), product of:
            0.5770437 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06806357 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2 = coord(1/5)
    
    Source
     http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. Understanding metadata (2004) 0.10
    0.097490415 = product of:
      0.24372603 = sum of:
        0.16995263 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
          0.16995263 = score(doc=2686,freq=2.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.46979034 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.0737734 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
          0.0737734 = score(doc=2686,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.30952093 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
      0.4 = coord(2/5)
    
    Abstract
     Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  6. Priss, U.: Faceted knowledge representation (1999) 0.09
    0.085304104 = product of:
      0.21326026 = sum of:
        0.14870855 = weight(_text_:objects in 2654) [ClassicSimilarity], result of:
          0.14870855 = score(doc=2654,freq=2.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.41106653 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.06455172 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
          0.06455172 = score(doc=2654,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.2708308 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
      0.4 = coord(2/5)
    
    Abstract
     Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
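
The formalism described in the abstract of entry 6 suggests a very small data model: units as atomic identifiers, relations as binary (0/1) matrices over those units, facets as bundles of units and relations, and interpretations as mappings between representations. A toy sketch in Python, with class and field names of our own choosing that merely mirror the abstract's terminology:

    from dataclasses import dataclass

    @dataclass
    class Facet:
        """One aspect/viewpoint of a knowledge system: units plus binary relations."""
        units: list                # atomic elements (may refer to external objects)
        relations: dict            # relation name -> 0/1 matrix over the units

        def related(self, relation, unit):
            """Units that the given unit points to under one relation."""
            i = self.units.index(unit)
            row = self.relations[relation][i]
            return [u for u, bit in zip(self.units, row) if bit]

    # A miniature faceted thesaurus facet: "broader term" as a binary relation.
    topics = Facet(
        units=["retrieval", "indexing", "thesauri"],
        relations={"broader": [[0, 0, 0],
                               [1, 0, 0],    # indexing -> retrieval
                               [0, 1, 0]]},  # thesauri -> indexing
    )
    print(topics.related("broader", "thesauri"))   # ['indexing']

    # An interpretation maps units of one facet onto another representation.
    interpretation = {"retrieval": "IR", "indexing": "IDX", "thesauri": "THS"}
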
  7. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.08
    0.08284423 = product of:
      0.20711057 = sum of:
        0.16626124 = weight(_text_:objects in 553) [ClassicSimilarity], result of:
          0.16626124 = score(doc=553,freq=10.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.4595864 = fieldWeight in 553, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
        0.04084933 = weight(_text_:7 in 553) [ClassicSimilarity], result of:
          0.04084933 = score(doc=553,freq=4.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.18116702 = fieldWeight in 553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.4 = coord(2/5)
    
    Abstract
     Currently, a number of efforts are being carried out to integrate collections from different institutions that contain heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus gain access to other collections using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of aligning manually all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed in order to provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary used in the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the point for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
     References [1] http://www.theeuropeanlibrary.org [2] http://www.geheugenvannederland.nl [3] http://macs.cenl.org [4] Day, M., Koch, T., Neuroth, H.: Searching and browsing multiple subject gateways in the Renardus service. In Proceedings of the RC33 Sixth International Conference on Social Science Methodology, Amsterdam, 2005. [5] http://stitch.cs.vu.nl [6] http://mandragore.bnf.fr [7] http://www.iconclass.nl [8] www.w3.org/2004/02/skos/ 1 The Semantic Web vision supposes sharing data using different conceptualizations (ontologies), and therefore implies tackling the semantic interoperability problem
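
The mapping mechanism described in entry 7, where a query for concept C also retrieves objects indexed against an equivalent concept D from another vocabulary, amounts to query expansion over an alignment table. An illustrative sketch follows; the concept identifiers, the tiny index and the expand/search helpers are invented for this example, and a real system would read such equivalences from SKOS mapping statements rather than a hard-coded dictionary:

    # Cross-vocabulary query expansion over concept mappings (illustrative only).
    mappings = {
        # hypothetical Mandragore concept -> equivalent Iconclass concept
        "mandragore:ange": {"iconclass:11G"},
        "mandragore:vierge": {"iconclass:11F"},
    }

    index = {
        # object id -> concepts it was indexed against, from either vocabulary
        "ms_fr_001_f12r": {"iconclass:11G"},
        "ms_lat_023_f03v": {"mandragore:ange"},
    }

    def expand(concept):
        """The query concept plus all concepts mapped as equivalent to it."""
        return {concept} | mappings.get(concept, set())

    def search(concept):
        """Objects indexed against the concept or any of its equivalents."""
        wanted = expand(concept)
        return {obj for obj, concepts in index.items() if concepts & wanted}

    print(search("mandragore:ange"))   # both manuscripts, despite different vocabularies
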
  8. Babcock, K.; Lee, S.; Rajakumar, J.; Wagner, A.: Providing access to digital collections (2020) 0.08
    0.07659296 = product of:
      0.1914824 = sum of:
        0.15021834 = weight(_text_:objects in 5855) [ClassicSimilarity], result of:
          0.15021834 = score(doc=5855,freq=4.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.41523993 = fieldWeight in 5855, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5855)
        0.041264053 = weight(_text_:7 in 5855) [ClassicSimilarity], result of:
          0.041264053 = score(doc=5855,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.18300632 = fieldWeight in 5855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5855)
      0.4 = coord(2/5)
    
    Abstract
    The University of Toronto Libraries is currently reviewing technology to support its Collections U of T service. Collections U of T provides search and browse access to 375 digital collections (and over 203,000 digital objects) at the University of Toronto Libraries. Digital objects typically include special collections material from the university as well as faculty digital collections, all with unique metadata requirements. The service is currently supported by IIIF-enabled Islandora, with one Fedora back end and multiple Drupal sites per parent collection (see attached image). Like many institutions making use of Islandora, UTL is now confronted with Drupal 7 end of life and has begun to investigate a migration path forward. This article will summarise the Collections U of T functional requirements and lessons learned from our current technology stack. It will go on to outline our research to date for alternate solutions. The article will review both emerging micro-service solutions, as well as out-of-the-box platforms, to provide an overview of the digital collection technology landscape in 2019. Note that our research is focused on reviewing technology solutions for providing access to digital collections, as preservation services are offered through other services at the University of Toronto Libraries.
  9. Jahrbuch der Auktionspreise für Bücher, Handschriften und Autographen (JAP) : Computerdatei (1997) 0.07
    0.06989794 = product of:
      0.17474484 = sum of:
        0.08252811 = weight(_text_:7 in 2967) [ClassicSimilarity], result of:
          0.08252811 = score(doc=2967,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.36601263 = fieldWeight in 2967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.078125 = fieldNorm(doc=2967)
        0.092216745 = weight(_text_:22 in 2967) [ClassicSimilarity], result of:
          0.092216745 = score(doc=2967,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.38690117 = fieldWeight in 2967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=2967)
      0.4 = coord(2/5)
    
    Date
    13. 3.1996 21:22:40
    Isbn
     3-7762-0425-7 (print edition) * 3-7762-0431-1 (CD-ROM)
  10. Baecker, D.: ¬Der Frosch, die Fliege und der Mensch : zum Tod von Humberto Maturana (2021) 0.07
    0.06989794 = product of:
      0.17474484 = sum of:
        0.08252811 = weight(_text_:7 in 236) [ClassicSimilarity], result of:
          0.08252811 = score(doc=236,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.36601263 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.078125 = fieldNorm(doc=236)
        0.092216745 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
          0.092216745 = score(doc=236,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.38690117 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=236)
      0.4 = coord(2/5)
    
    Date
    7. 5.2021 22:10:24
  11. Van de Sompel, H.; Young, J.A.; Hickey, T.B.: Using the OAI-PMH ... differently (2003) 0.06
    0.058993783 = product of:
      0.14748445 = sum of:
        0.1062204 = weight(_text_:objects in 1191) [ClassicSimilarity], result of:
          0.1062204 = score(doc=1191,freq=2.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.29361898 = fieldWeight in 1191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1191)
        0.041264053 = weight(_text_:7 in 1191) [ClassicSimilarity], result of:
          0.041264053 = score(doc=1191,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.18300632 = fieldWeight in 1191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1191)
      0.4 = coord(2/5)
    
    Abstract
     The Open Archives Initiative's Protocol for Metadata Harvesting (OAI-PMH) was created to facilitate discovery of distributed resources. The OAI-PMH achieves this by providing a simple, yet powerful framework for metadata harvesting. Harvesters can incrementally gather records contained in OAI-PMH repositories and use them to create services covering the content of several repositories. The OAI-PMH has been widely accepted, and until recently, it has mainly been applied to make Dublin Core metadata about scholarly objects contained in distributed repositories searchable through a single user interface. This article describes innovative applications of the OAI-PMH that we have introduced in recent projects. In these projects, OAI-PMH concepts such as resource and metadata format have been interpreted in novel ways. The result of doing so illustrates the usefulness of the OAI-PMH beyond the typical resource discovery using Dublin Core metadata. Also, through the inclusion of XSL stylesheets in protocol responses, OAI-PMH repositories have been directly overlaid with an interface that allows users to navigate the contained metadata by means of a Web browser. In addition, through the introduction of PURL partial redirects, complex OAI-PMH protocol requests have been turned into simple URIs that can more easily be published and used in downstream applications.
    Source
    D-Lib magazine. 9(2003) no.7/8, x S
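
Entry 11 presupposes the basic OAI-PMH harvesting loop: a harvester issues ListRecords requests over plain HTTP and follows resumptionToken values until the repository has been fully traversed. A minimal sketch using only the Python standard library; the endpoint URL is a placeholder, while verb, metadataPrefix and resumptionToken are the standard protocol parameters:

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    BASE_URL = "https://example.org/oai"   # placeholder repository endpoint

    def harvest(base_url, metadata_prefix="oai_dc"):
        """Yield all <record> elements, following resumptionTokens as needed."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as resp:
                tree = ET.fromstring(resp.read())
            yield from tree.iter(OAI + "record")
            token = tree.find(f".//{OAI}resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    # Usage (commented out because the endpoint above is only a placeholder):
    # for record in harvest(BASE_URL):
    #     print(record.findtext(f".//{OAI}identifier"))
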
  12. Henrich, A.: Information Retrieval : Grundlagen, Modelle und Anwendungen (2008) 0.06
    0.055918362 = product of:
      0.1397959 = sum of:
        0.06602249 = weight(_text_:7 in 1525) [ClassicSimilarity], result of:
          0.06602249 = score(doc=1525,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.2928101 = fieldWeight in 1525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0625 = fieldNorm(doc=1525)
        0.0737734 = weight(_text_:22 in 1525) [ClassicSimilarity], result of:
          0.0737734 = score(doc=1525,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.30952093 = fieldWeight in 1525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=1525)
      0.4 = coord(2/5)
    
    Date
    22. 8.2015 21:23:08
    Issue
     Version 1.2 (Rev: 5727, as of 7 January 2008).
  13. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.05
    0.0540515 = product of:
      0.2702575 = sum of:
        0.2702575 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
          0.2702575 = score(doc=5669,freq=2.0), product of:
            0.5770437 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.06806357 = queryNorm
            0.46834838 = fieldWeight in 5669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.2 = coord(1/5)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  14. Zia, L.L.: Growing a national learning environments and resources network for science, mathematics, engineering, and technology education : current issues and opportunities for the NSDL program (2001) 0.05
    0.052664507 = product of:
      0.13166127 = sum of:
        0.084976315 = weight(_text_:objects in 1217) [ClassicSimilarity], result of:
          0.084976315 = score(doc=1217,freq=2.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.23489517 = fieldWeight in 1217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=1217)
        0.046684954 = weight(_text_:7 in 1217) [ClassicSimilarity], result of:
          0.046684954 = score(doc=1217,freq=4.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.20704803 = fieldWeight in 1217, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.03125 = fieldNorm(doc=1217)
      0.4 = coord(2/5)
    
    Abstract
    The National Science Foundation's (NSF) National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL) program seeks to create, develop, and sustain a national digital library supporting science, mathematics, engineering, and technology (SMET) education at all levels -- preK-12, undergraduate, graduate, and life-long learning. The resulting virtual institution is expected to catalyze and support continual improvements in the quality of science, mathematics, engineering, and technology (SMET) education in both formal and informal settings. The vision for this program has been explored through a series of workshops over the past several years and documented in accompanying reports and monographs. (See [1-7, 10, 12, and 13].) These efforts have led to a characterization of the digital library as a learning environments and resources network for science, mathematics, engineering, and technology education, that is: * designed to meet the needs of learners, in both individual and collaborative settings; * constructed to enable dynamic use of a broad array of materials for learning primarily in digital format; and * managed actively to promote reliable anytime, anywhere access to quality collections and services, available both within and without the network. Underlying the NSDL program are several working assumptions. First, while there is currently no lack of "great piles of content" on the Web, there is an urgent need for "piles of great content". The difficulties in discovering and verifying the authority of appropriate Web-based material are certainly well known, yet there are many examples of learning resources of great promise available (particularly those exploiting the power of multiple media), with more added every day. The breadth and interconnectedness of the Web are simultaneously a great strength and shortcoming. Second, the "unit" or granularity of educational content can and will shrink, affording the opportunity for users to become creators and vice versa, as learning objects are reused, repackaged, and repurposed. To be sure, this scenario cannot take place without serious attention to intellectual property and digital rights management concerns. But new models and technologies are being explored (see a number of recent articles in the January issue of D-Lib Magazine). Third, there is a need for an "organizational infrastructure" that facilitates connections between distributed users and distributed content, as alluded to in the third bullet above. Finally, while much of the ongoing use of the library is envisioned to be "free" in the sense of the public good, there is an opportunity and a need to consider multiple alternative models of sustainability, particularly in the area of services offered by the digital library. More details about the NSDL program including information about proposal deadlines and current awards may be found at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl>.
    Source
    D-Lib magazine. 7(2001) no.3, xx S
  15. Deutsche Nationalbibliographie : CD-ROM 1972-1985 (1997) 0.05
    0.04892856 = product of:
      0.1223214 = sum of:
        0.057769675 = weight(_text_:7 in 4748) [ClassicSimilarity], result of:
          0.057769675 = score(doc=4748,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.25620884 = fieldWeight in 4748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4748)
        0.06455172 = weight(_text_:22 in 4748) [ClassicSimilarity], result of:
          0.06455172 = score(doc=4748,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.2708308 = fieldWeight in 4748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4748)
      0.4 = coord(2/5)
    
    Date
    7. 5.2000 10:38:22
  16. Roth, G.; Gerhardt, V.; Flaßpöhler, S.: Wie flexibel ist mein Ich? : Dialog (2012) 0.05
    0.04892856 = product of:
      0.1223214 = sum of:
        0.057769675 = weight(_text_:7 in 955) [ClassicSimilarity], result of:
          0.057769675 = score(doc=955,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.25620884 = fieldWeight in 955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0546875 = fieldNorm(doc=955)
        0.06455172 = weight(_text_:22 in 955) [ClassicSimilarity], result of:
          0.06455172 = score(doc=955,freq=2.0), product of:
            0.23834704 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06806357 = queryNorm
            0.2708308 = fieldWeight in 955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=955)
      0.4 = coord(2/5)
    
    Date
    7. 5.2023 11:14:22
  17. Gladney, H.M.; Bennett, J.L.: What do we mean by authentic? : what's the real McCoy? (2003) 0.05
    0.047195025 = product of:
      0.11798756 = sum of:
        0.084976315 = weight(_text_:objects in 1201) [ClassicSimilarity], result of:
          0.084976315 = score(doc=1201,freq=2.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.23489517 = fieldWeight in 1201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=1201)
        0.033011246 = weight(_text_:7 in 1201) [ClassicSimilarity], result of:
          0.033011246 = score(doc=1201,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.14640506 = fieldWeight in 1201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.03125 = fieldNorm(doc=1201)
      0.4 = coord(2/5)
    
    Abstract
    Authenticity is among digital document security properties needing attention. Literature focused on preservation reveals uncertainty - even confusion - about what we might mean by authentic. The current article provides a definition that spans vernacular usage of "authentic", ranging from digital documents through material artifacts to natural objects. We accomplish this by modeling entity transmission through time and space by signal sequences and object representations at way stations, and by carefully distinguishing objective facts from subjective values and opinions. Our model can be used to clarify other words that denote information quality, such as "evidence", "essential", and "useful". Digital documents are becoming important in most kinds of human activity. Whenever we buy something valuable, agree to a contract, design and build a machine, or provide a service, we should understand exactly what we intend and be ready to describe this as accurately as the occasion demands. This makes worthwhile whatever care is needed to devise definitions that are sufficiently precise and distinct from each other to explain what we are doing and to minimize community confusion. When we set out, some months ago, to describe answers to the open technical challenges of digital preservation, we took for granted the existence of a broad, unambiguous definition for authentic. Document authenticity is of fundamental importance not only for scholarly work, but also for practical affairs, including legal matters, regulatory requirements, military and other governmental information, and financial transactions. Trust, and evidence for deciding what can be trusted as authentic are considered in many works about digital preservation. These topics are broad, deep, and subtle, raising many questions. Among these, the current work addresses a single question, "What is a useful meaning of authentic or of authenticity for digital documents - a meaning that is not itself a source of confusion?" Progress in managing digital information would be hampered without a clear answer that is sufficiently objective to guide the evaluation of communication and computing technology. Our approach to constructing an answer to this question is to break each object transmission into pieces whose treatment we can describe explicitly and with attention to potential imperfections.
    Source
    D-Lib magazine. 9(2003) no.7/8, x S
  18. Thaller, M.: From the digitized to the digital library (2001) 0.05
    0.045955773 = product of:
      0.114889435 = sum of:
        0.090131 = weight(_text_:objects in 1159) [ClassicSimilarity], result of:
          0.090131 = score(doc=1159,freq=4.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.24914396 = fieldWeight in 1159, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
        0.024758434 = weight(_text_:7 in 1159) [ClassicSimilarity], result of:
          0.024758434 = score(doc=1159,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.109803796 = fieldWeight in 1159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
      0.4 = coord(2/5)
    
    Content
    Theses: 1. Who should be addressed by digital libraries? How shall we measure whether we have reached the desired audience? Thesis: The primary audience for a digital library is neither the leading specialist in the respective field, nor the freshman, but the advanced student or young researcher and the "almost specialist". The primary topic of digitization projects should not be the absolute top range of the "treasures" of a collection, but those materials that we always have wanted to promote if they were just marginally more important. Whether we effectively serve them to the appropriate community of serious users can only be measured according to criteria that have yet to be developed. 2. The appropriate size of digital libraries and their access tools Thesis: Digital collections need a critical, minimal size to make their access worthwhile. In the end, users want to access information, not metadata or gimmicks. 3. The quality of digital objects Thesis: If digital library resources are to be integrated into the daily work of the research community, they must appear on the screen of the researcher in a quality that is useful in actual work. 4. The granularity / modularity of digital repositories Thesis: While digital libraries are self-contained bodies of information, they are not the basic unit that most users want to access. Users are, as a rule, more interested in the individual objects in the library and need a straightforward way to access them. 5. Digital collections as integrated reference systems Thesis: Traditional libraries support their collections with reference material. Digital collections need to find appropriate models to replicate this functionality. 6. Library and teaching Thesis: The use of multimedia in teaching is as much of a current buzzword as the creation of digital collections. It is obvious that they should be connected. A clear-cut separation of the two approaches is nevertheless necessary.
    Source
    D-Lib magazine. 7(2001) no.2, xx S
  19. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.05
    0.04504864 = product of:
      0.1126216 = sum of:
        0.09198957 = weight(_text_:objects in 1216) [ClassicSimilarity], result of:
          0.09198957 = score(doc=1216,freq=6.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.2542815 = fieldWeight in 1216, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
        0.020632027 = weight(_text_:7 in 1216) [ClassicSimilarity], result of:
          0.020632027 = score(doc=1216,freq=2.0), product of:
            0.22547886 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.06806357 = queryNorm
            0.09150316 = fieldWeight in 1216, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
      0.4 = coord(2/5)
    
    Abstract
    Reality is messy. Individuals perceive or define objects differently. Objects may change over time, morphing into new versions of their former selves or into things altogether different. A book can give rise to a translation, derivation, or edition, and these resulting objects are related in complex ways to each other and to the people and contexts in which they were created or transformed. Providing a normalized view of such a messy reality is a precondition for managing information. From the first library catalogs, through Melvil Dewey's Decimal Classification system in the nineteenth century, to today's MARC encoding of AACR2 cataloging rules, libraries have epitomized the process of what David Levy calls "order making", whereby catalogers impose a veneer of regularity on the natural disorder of the artifacts they encounter. The pre-digital library within which the Catalog and its standards evolved was relatively self-contained and controlled. Creating and maintaining catalog records was, and still is, the task of professionals. Today's Web, in contrast, has brought together a diversity of information management communities, with a variety of order-making standards, into what Stuart Weibel has called the Internet Commons. The sheer scale of this context has motivated a search for new ways to describe and index information. Second-generation search engines such as Google can yield astonishingly good search results, while tools such as ResearchIndex for automatic citation indexing and techniques for inferring "Web communities" from constellations of hyperlinks promise even better methods for focusing queries on information from authoritative sources. Such "automated digital libraries," according to Bill Arms, promise to radically reduce the cost of managing information. Alongside the development of such automated methods, there is increasing interest in metadata as a means of imposing pre-defined order on Web content. While the size and changeability of the Web makes professional cataloging impractical, a minimal amount of information ordering, such as that represented by the Dublin Core (DC), may vastly improve the quality of an automatic index at low cost; indeed, recent work suggests that some types of simple description may be generated with little or no human intervention.
    Source
    D-Lib magazine. 7(2001) no.1, xx S
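
The "minimal amount of information ordering" that entry 19 attributes to the Dublin Core amounts to a flat set of fifteen optional, repeatable elements (title, creator, date, subject, and so on). A sketch of how such a simple description can be serialized with the Python standard library; the element names are the standard DCMES terms, and the field values are taken from this entry purely for illustration:

    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("dc", DC)

    record = ET.Element("record")
    for element, value in [
        ("title", "Keeping Dublin Core simple"),
        ("creator", "Lagoze, C."),
        ("date", "2001"),
        ("subject", "metadata"),
        ("subject", "resource discovery"),   # DC elements are repeatable
    ]:
        ET.SubElement(record, f"{{{DC}}}{element}").text = value

    print(ET.tostring(record, encoding="unicode"))
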
  20. Faceted classification of information (n.d.) 0.04
    0.04248816 = product of:
      0.2124408 = sum of:
        0.2124408 = weight(_text_:objects in 2653) [ClassicSimilarity], result of:
          0.2124408 = score(doc=2653,freq=2.0), product of:
            0.36176273 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06806357 = queryNorm
            0.58723795 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=2653)
      0.2 = coord(1/5)
    
    Abstract
    An explanation of faceted classification meant for people working in knowledge management. An example given for a high-technology company has the fundamental categories Products, Applications, Organizations, People, Domain objects ("technologies applied in the marketplace in which the organization participates"), Events (i.e. time), and Publications.
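
The example in entry 20 maps naturally onto a simple retrieval pattern: each item carries a value per facet, and queries combine constraints across facets. A small sketch; only the facet names come from the abstract, while the catalogued items and the select helper are invented:

    # Facets taken from the abstract; the catalogued items are invented examples.
    items = [
        {"Products": "RouterX", "Applications": "telecom", "Organizations": "Acme",
         "People": "J. Doe", "Events": "2019", "Publications": "whitepaper-17"},
        {"Products": "RouterX", "Applications": "datacenter", "Organizations": "Acme",
         "People": "A. Roe", "Events": "2020", "Publications": "manual-04"},
    ]

    def select(items, **facet_values):
        """Items whose facet values satisfy every given facet constraint."""
        return [it for it in items
                if all(it.get(facet) == value for facet, value in facet_values.items())]

    print(select(items, Products="RouterX", Events="2020"))   # the 2020 RouterX record
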

Languages

  • e 227
  • d 151
  • i 5
  • el 2
  • f 2
  • a 1
  • nl 1

Types

  • a 191
  • i 30
  • m 16
  • r 7
  • s 7
  • b 6
  • n 3
  • x 3
  • l 1