Search (8 results, page 1 of 1)

  • author_ss:"Becker, C."
  1. Becker, C.; Rauber, A.: Decision criteria in digital preservation : what to measure and how (2011)
    
    Abstract
    The enormous amount of valuable information that is produced today and needs to be made available over the long-term has led to increased efforts in scalable, automated solutions for long-term digital preservation. The mission of preservation planning is to define the optimal actions to ensure future access to digital content and react to changes that require adjustments in repository operations. Considerable effort has been spent in the past on defining, implementing, and validating a framework and system for preservation planning. This article sheds light on the actual decision criteria and influence factors to be considered when choosing digital preservation actions. It is based on an extensive evaluation of case studies on preservation planning for a range of different types of objects with partners from different institutional backgrounds. We categorize decision criteria from a number of real-world decision-making instances in a taxonomy. We show that a majority of the criteria can be evaluated by applying automated measurements under realistic conditions, and demonstrate that controlled experimentation and automated measurements can be used to substantially improve repeatability of decisions and reduce the effort needed to evaluate preservation components. The presented measurement framework enables scalable preservation and monitoring and supports trust in preservation decisions because extensive evidence is produced in a reproducible, automated way and documented as the basis of decision making in a standardized form.
    Type
    a
  2. Duretec, K.; Becker, C.: Format technology lifecycle analysis (2017)
    
    Abstract
    The lifecycles of format technology have been a defining concern for digital stewardship research and practice. However, little evidence exists to provide robust methods for assessing the state of any given format technology and describing its evolution over time. This article introduces relevant models from diffusion theory and market research and presents a replicable analysis method to compute models of technology evolution. Data cleansing and the combination of multiple data sources enable the application of nonlinear regression to estimate the parameters of the Bass diffusion model on format technology market lifecycles. Through its application to a longitudinal data set from the UK Web Archive, we demonstrate that the method produces reliable results and show that the Bass model can be used to describe format lifecycles. By analyzing adoption patterns across market segments, new insights are inferred about how the diffusion of formats and products such as applications occurs over time. The analysis provides a stepping stone to a more robust and evidence-based approach to model technology evolution.
    Type
    a
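The abstract above describes estimating Bass diffusion model parameters by nonlinear regression on format-usage data. A minimal sketch of that fitting step is shown below on synthetic observations; the parameter values, data, and use of `scipy.optimize.curve_fit` are illustrative assumptions, not the authors' implementation or the UK Web Archive data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cdf(t, p, q, m):
    # Cumulative adoptions under the Bass diffusion model:
    #   m * (1 - exp(-(p+q)t)) / (1 + (q/p) * exp(-(p+q)t))
    # p: coefficient of innovation, q: imitation, m: market size
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Hypothetical adoption counts over 15 time steps, standing in for
# format occurrence counts extracted from a web archive
t = np.arange(1, 16, dtype=float)
rng = np.random.default_rng(0)
y = bass_cdf(t, 0.03, 0.4, 1000.0) + rng.normal(0, 5, t.size)

# Nonlinear least squares estimate of (p, q, m); bounds keep the
# optimizer away from the q/p singularity at p = 0
(p_hat, q_hat, m_hat), _ = curve_fit(
    bass_cdf, t, y,
    p0=(0.01, 0.1, y.max()),
    bounds=([1e-6, 1e-6, 1.0], [1.0, 2.0, 1e5]),
)
print(round(p_hat, 3), round(q_hat, 3), round(m_hat, 1))
```

With clean enough input, the recovered (p, q, m) should sit close to the generating values, which is the "reliable results" property the article tests on real archive data.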
  3. Becker, C.; Bizer, C.: DBpedia Mobile : a location-aware Semantic Web client
    
    Abstract
DBpedia Mobile is a location-aware client for the Semantic Web that can be used on an iPhone and other mobile devices. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, the user can explore background information about his surroundings by navigating along data links into other Web data sources. DBpedia Mobile has been designed for the use case of a tourist exploring a city. As the application is not restricted to a fixed set of data sources but can retrieve and display data from arbitrary Web data sources, DBpedia Mobile can also be employed within other use cases, including ones unforeseen by its developers. Besides accessing Web data, DBpedia Mobile also enables users to publish their current location, pictures and reviews to the Semantic Web so that they can be used by other Semantic Web applications. Instead of simply being tagged with geographical coordinates, published content is interlinked with a nearby DBpedia resource and thus contributes to the overall richness of the Geospatial Semantic Web.
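The core step the abstract describes, turning a GPS position into a query for nearby DBpedia locations, can be sketched as follows. This is an assumed, simplified approach (a flat bounding box and a hand-built SPARQL string), not the DBpedia Mobile codebase; the prefixes are the standard WGS84 and RDFS vocabularies.

```python
import math

def bbox(lat, lon, radius_km):
    # Approximate bounding box around a GPS position; adequate for
    # small radii away from the poles
    dlat = radius_km / 111.32
    dlon = radius_km / (111.32 * math.cos(math.radians(lat)))
    return lat - dlat, lat + dlat, lon - dlon, lon + dlon

def nearby_query(lat, lon, radius_km=2.0, limit=25):
    # Build a SPARQL query for labelled resources inside the box
    s, n, w, e = bbox(lat, lon, radius_km)
    return f"""PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?place ?label ?lat ?long WHERE {{
  ?place geo:lat ?lat ; geo:long ?long ;
         rdfs:label ?label .
  FILTER (?lat  > {s} && ?lat  < {n} &&
          ?long > {w} && ?long < {e} &&
          lang(?label) = "en")
}} LIMIT {limit}"""

# Example position (Brandenburg Gate); the query would be sent to a
# SPARQL endpoint such as DBpedia's to populate the map
q = nearby_query(52.5163, 13.3777)
print(q)
```

A production client would rather use an endpoint's native geospatial index than a coordinate filter, but the sketch shows how "nearby" becomes a data query.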
  4. Bizer, C.; Lehmann, J.; Kobilarov, G.; Auer, S.; Becker, C.; Cyganiak, R.; Hellmann, S.: DBpedia : a crystallization point for the Web of Data (2009)
    
    Abstract
The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
    Type
    a
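"Dereferencing a globally unique identifier into a rich RDF description", as the abstract above puts it, works through HTTP content negotiation on Linked Data identifiers. A minimal sketch: the request below asks a DBpedia resource URI for RDF/XML instead of HTML. The request object is constructed but deliberately not sent, so the example stays offline; the URI and Accept value follow common Linked Data conventions rather than any code from the article.

```python
from urllib.request import Request

def rdf_request(resource_uri):
    # Ask for RDF via content negotiation; a Linked Data server is
    # expected to redirect the resource URI to a machine-readable
    # data document when this Accept header is present
    return Request(resource_uri, headers={"Accept": "application/rdf+xml"})

req = rdf_request("http://dbpedia.org/resource/Berlin")
print(req.full_url, req.get_header("Accept"))
# Sending it (e.g. with urllib.request.urlopen) would return the RDF
# description that data publishers link their own datasets against
```

The same identifier serves HTML to a browser and RDF to a data client, which is what makes it usable as an interlinking hub.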
  5. Becker, C.: Betriebliches Informationsverhalten im Online-Bereich [Corporate information behaviour in the online sector] (1997)
    
    Type
    a
  6. Maemura, E.; Worby, N.; Milligan, I.; Becker, C.: If these crawls could talk : studying and documenting web archives provenance (2018)
    
    Abstract
    The increasing use and prominence of web archives raises the urgency of establishing mechanisms for transparency in the making of web archives to facilitate the process of evaluating a web archive's provenance, scoping, and absences. Some choices and process events are captured automatically, but their interactions are not currently well understood or documented. This study examined the decision space of web archives and its role in shaping what is and what is not captured in the web archiving process. By comparing how three different web archives collections were created and documented, we investigate how curatorial decisions interact with technical and external factors and we compare commonalities and differences. The findings reveal the need to understand both the social and technical context that shapes those decisions and the ways in which these individual decisions interact. Based on the study, we propose a framework for documenting key dimensions of a collection that addresses the situated nature of the organizational context, technical specificities, and unique characteristics of web materials that are the focus of a collection. The framework enables future researchers to undertake empirical work studying the process of creating web archives collections in different contexts.
    Type
    a
  7. Maemura, E.; Moles, N.; Becker, C.: Organizational assessment frameworks for digital preservation : a literature review and mapping (2017)
    
    Abstract
    As the field of digital preservation (DP) matures, there is an increasing need to systematically assess an organization's abilities to achieve its digital preservation goals, and a wide variety of assessment tools have been created for this purpose. This article aims to map the landscape of research in this area, evaluate the current maturity of knowledge on this central question in DP and provide direction for future research. To do so, this paper reviews assessment frameworks in digital preservation through a systematic literature search and categorizes the literature by type of research. The analysis shows that publication output around assessment in digital preservation has increased markedly over time, but most existing work focuses on developing new models rather than rigorous evaluation and validation of existing frameworks. Significant gaps are present in the application of robust conceptual foundations and design methods, and in the level of empirical evidence available to enable the evaluation and validation of assessment models. The analysis and comparison with other fields suggest that the design of assessment models in DP should be studied rigorously in both theory and practice, and that the development of future models will benefit from applying existing methods, processes, and principles for model design.
    Type
    a
  8. Becker, C.; Maemura, E.; Moles, N.: The design and use of assessment frameworks in digital curation (2020)
    
    Abstract
    To understand and improve their current abilities and maturity, organizations use diagnostic instruments such as maturity models and other assessment frameworks. Increasing numbers of these are being developed in digital curation. Their central role in strategic decision making raises the need to evaluate their fitness for this purpose and develop guidelines for their design and evaluation. A comprehensive review of assessment frameworks, however, found little evidence that existing assessment frameworks have been evaluated systematically, and no methods for their evaluation. This article proposes a new methodology for evaluating the design and use of assessment frameworks. It builds on prior research on maturity models and combines analytic and empirical evaluation methods to explain how the design of assessment frameworks influences their application in practice, and how the design process can effectively take this into account. We present the evaluation methodology and its application to two frameworks. The evaluation results lead to guidelines for the design process of assessment frameworks in digital curation. The methodology provides insights to the designers of the evaluated frameworks that they can consider in future revisions; methodical guidance for researchers in the field; and practical insights and words of caution to organizations keen on diagnosing their abilities.
    Type
    a