Search (86 results, page 1 of 5)

  • year_i:[2010 TO 2020}
  • type_ss:"el"
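
The two filters above are Lucene/Solr filter queries: in year_i:[2010 TO 2020} the square bracket is an inclusive bound and the curly brace an exclusive one, so the range matches 2010 <= year_i < 2020, while type_ss:"el" restricts the type field to the value "el". A minimal sketch of how such a filtered query could be reproduced against a Solr select handler is given below; the endpoint URL and core name are assumptions, not taken from this page.

```python
import requests

# Hypothetical Solr endpoint; host and core name are assumptions.
SOLR_SELECT = "http://localhost:8983/solr/literature/select"

params = {
    "q": "*:*",
    # Active filters as shown above: [ is inclusive, } is exclusive,
    # so the first filter matches 2010 <= year_i < 2020.
    "fq": ['year_i:[2010 TO 2020}', 'type_ss:"el"'],
    "rows": 20,    # 20 hits per page, as in this listing
    "start": 0,    # offset 0 = page 1
    "wt": "json",
}

resp = requests.get(SOLR_SELECT, params=params)
print(resp.json()["response"]["numFound"])  # 86 in the result set shown here
```
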
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.04
    0.044288896 = product of:
      0.13286668 = sum of:
        0.13286668 = product of:
          0.39860004 = sum of:
            0.39860004 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39860004 = score(doc=1826,freq=2.0), product of:
                0.42553797 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05019314 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
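
Each hit is followed by a Lucene ClassicSimilarity explain tree like the one above; the numbers multiply out as plain tf-idf arithmetic. The sketch below recomputes the first breakdown in Python. The tf and idf formulas are the standard ClassicSimilarity definitions (an assumption, they are not spelled out on this page), and the two coord(1/3) factors reflect that only one of three query clauses matched at each level.

```python
import math

# Values copied from the breakdown of hit 1 (term "3a", doc 1826).
freq       = 2.0         # termFreq
doc_freq   = 24
max_docs   = 44218
query_norm = 0.05019314
field_norm = 0.078125

idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011, as reported above
tf  = math.sqrt(freq)                          # 1.4142135

query_weight = idf * query_norm                # 0.42553797
field_weight = tf * idf * field_norm           # 0.93669677
term_score   = query_weight * field_weight     # 0.39860004

# coord(1/3) is applied at two levels of the query tree.
final_score = term_score * (1 / 3) * (1 / 3)
print(round(final_score, 9))                   # ~0.044288896, displayed as 0.04
```
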
  2. Academic publishing : No peeking (2014) 0.04
    0.043680202 = product of:
      0.1310406 = sum of:
        0.1310406 = product of:
          0.2620812 = sum of:
            0.2620812 = weight(_text_:publishing in 805) [ClassicSimilarity], result of:
              0.2620812 = score(doc=805,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                1.0687344 = fieldWeight in 805, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.109375 = fieldNorm(doc=805)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A publishing giant goes after the authors of its journals' papers
  3. Open MIND (2015) 0.04
    0.039587595 = product of:
      0.059381388 = sum of:
        0.042380195 = weight(_text_:electronic in 1648) [ClassicSimilarity], result of:
          0.042380195 = score(doc=1648,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.21597168 = fieldWeight in 1648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1648)
        0.017001195 = product of:
          0.03400239 = sum of:
            0.03400239 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
              0.03400239 = score(doc=1648,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.19345059 = fieldWeight in 1648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1648)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Date
    27. 1.2015 11:48:22
  4. Ginther, C.; Lackner, K.: Predatory Publishing : Herausforderung für Wissenschaftler/innen und Bibliotheken (2019) 0.03
    0.029599054 = product of:
      0.08879716 = sum of:
        0.08879716 = product of:
          0.17759432 = sum of:
            0.17759432 = weight(_text_:publishing in 5330) [ClassicSimilarity], result of:
              0.17759432 = score(doc=5330,freq=10.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.72420746 = fieldWeight in 5330, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5330)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     Predatory publishing has been familiar to the broader public since the extensive international media coverage in the summer of 2018. For several weeks, print media, radio and television in numerous countries, including the German-speaking world, reported in detail on these fraudulent business practices. In professional circles, however, the problem has been known for several years and has been growing steadily ever since. Since 2017, the publication services at the University of Graz have been advising and informing researchers as well as students about predatory publishing. The first section of the following article provides essential information on predatory publishing and on the related topics of fake science and fake news that circulated during the 2018 media campaign; the two sections that follow turn to practice, covering first the basics of dealing with predatory publishing at universities and then, as a practical case study, the awareness-raising work and services provided at the University of Graz by staff of the university library.
  5. Buranyi, S.: Is the staggeringly profitable business of scientific publishing bad for science? (2017) 0.03
    0.026748553 = product of:
      0.08024566 = sum of:
        0.08024566 = product of:
          0.16049132 = sum of:
            0.16049132 = weight(_text_:publishing in 3711) [ClassicSimilarity], result of:
              0.16049132 = score(doc=3711,freq=6.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.6544635 = fieldWeight in 3711, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3711)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    It is an industry like no other, with profit margins to rival Google - and it was created by one of Britain's most notorious tycoons: Robert Maxwell. "Even scientists who are fighting for reform are often not aware of the roots of the system: how, in the boom years after the second world war, entrepreneurs built fortunes by taking publishing out of the hands of scientists and expanding the business on a previously unimaginable scale. And no one was more transformative and ingenious than Robert Maxwell, who turned scientific journals into a spectacular money-making machine that bankrolled his rise in British society."
    Source
    https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science
  6. Baier Benninger, P.: Model requirements for the management of electronic records (MoReq2) : Anleitung zur Umsetzung (2011) 0.02
    0.02397386 = product of:
      0.07192158 = sum of:
        0.07192158 = weight(_text_:electronic in 4343) [ClassicSimilarity], result of:
          0.07192158 = score(doc=4343,freq=4.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.3665161 = fieldWeight in 4343, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=4343)
      0.33333334 = coord(1/3)
    
    Abstract
     Faced with a growing mountain of digital information, many organisations, including smaller companies, public administrations and other bodies, are busy ordering and structuring their filing systems. Most organisations have some concept of document control in place. Records management takes this further in two main respects: it places the context and provenance of records at the centre, beyond day-to-day business, and it lays down rules for how unused or inactive documents are to be handled. With the «Model Requirements for the Management of Electronic Records» (MoReq), the European Commission created a standard that covers all core areas of records management and thus the entire life cycle of documents, from creation and use through to archiving and disposal. The «Anleitung zur Umsetzung» (implementation guide) summarises the extensive list of requirements in MoReq2 (August 2008) and supplements it with explanatory sections, with the aim of serving as a handy instrument when introducing a records management system.
  7. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.02
    0.022144448 = product of:
      0.06643334 = sum of:
        0.06643334 = product of:
          0.19930002 = sum of:
            0.19930002 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.19930002 = score(doc=4388,freq=2.0), product of:
                0.42553797 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05019314 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    Vgl. unter: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  8. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.02
    0.021840101 = product of:
      0.0655203 = sum of:
        0.0655203 = product of:
          0.1310406 = sum of:
            0.1310406 = weight(_text_:publishing in 604) [ClassicSimilarity], result of:
              0.1310406 = score(doc=604,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5343672 = fieldWeight in 604, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=604)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby - the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
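
iQvoc itself is a Ruby on Rails application, but the data it manages is plain SKOS-XL. As an illustration of what a SKOS-XL concept with a reified preferred label looks like, here is a small hypothetical example built with Python's rdflib; the concept URI and label text are invented, and nothing here reflects iQvoc's internal data model.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")
EX = Namespace("http://example.org/vocab/")   # hypothetical vocabulary namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("skosxl", SKOSXL)

concept = EX["waterPollution"]
label = EX["waterPollution_label_en"]

g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOSXL.prefLabel, label))     # the reified label is the "XL" part
g.add((label, RDF.type, SKOSXL.Label))
g.add((label, SKOSXL.literalForm, Literal("water pollution", lang="en")))

print(g.serialize(format="turtle"))
```
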
  9. Cossham, A.F.: Models of the bibliographic universe (2017) 0.02
    0.019777425 = product of:
      0.059332274 = sum of:
        0.059332274 = weight(_text_:electronic in 3817) [ClassicSimilarity], result of:
          0.059332274 = score(doc=3817,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.30236036 = fieldWeight in 3817, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3817)
      0.33333334 = coord(1/3)
    
    Abstract
    What kinds of mental models do library catalogue users have of the bibliographic universe in an age of online and electronic information? Using phenomenography and grounded analysis, it identifies participants' understanding, experience, and conceptualisation of the bibliographic universe, and identifies their expectations when using library catalogues. It contrasts participants' mental models with existing LIS models, and explores the nature of the bibliographic universe. The bibliographic universe can be considered to be a social object that exists because it is inscribed in catalogue records, cataloguing codes, bibliographies, and other bibliographic tools. It is a socially constituted phenomenon.
  10. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.02
    0.018720087 = product of:
      0.056160256 = sum of:
        0.056160256 = product of:
          0.11232051 = sum of:
            0.11232051 = weight(_text_:publishing in 2116) [ClassicSimilarity], result of:
              0.11232051 = score(doc=2116,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.45802903 = fieldWeight in 2116, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2116)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside of web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
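
As a rough illustration of the structured-data side the abstract mentions (schema.org markup carried in an HTML5 page as JSON-LD), the snippet below assembles a hypothetical schema.org Book description in Python. The title, author and chapter names are invented, and the property choices are merely plausible; they are not taken from the MSU prototype.

```python
import json

# Hypothetical schema.org/JSON-LD description of a book, of the kind that could be
# embedded in an HTML5 page inside a <script type="application/ld+json"> element.
book_jsonld = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Example Open Web Book",                      # invented title
    "author": {"@type": "Person", "name": "Jane Doe"},    # invented author
    "inLanguage": "en",
    "isAccessibleForFree": True,
    "hasPart": [
        {"@type": "Chapter", "name": "Introduction", "position": 1},
        {"@type": "Chapter", "name": "Reading in the Browser", "position": 2},
    ],
}

print(json.dumps(book_jsonld, indent=2))
```
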
  11. Mäkelä, E.; Hyvönen, E.; Ruotsalo, T.: How to deal with massively heterogeneous cultural heritage data : lessons learned in CultureSampo (2012) 0.02
    0.018720087 = product of:
      0.056160256 = sum of:
        0.056160256 = product of:
          0.11232051 = sum of:
            0.11232051 = weight(_text_:publishing in 3263) [ClassicSimilarity], result of:
              0.11232051 = score(doc=3263,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.45802903 = fieldWeight in 3263, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3263)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper presents the CultureSampo system for publishing heterogeneous linked data as a service. Discussed are the problems of converting legacy data into linked data, as well as the challenge of making the massively heterogeneous yet interlinked cultural heritage content interoperable on a semantic level. Novel user interface concepts for utilizing the content are also presented. In the approach described, the data is published not only for human use, but also as intelligent services for other computer systems, which can then provide interfaces of their own for the linked data. As a concrete use case of using CultureSampo as a service, the BookSampo system for publishing Finnish fiction literature on the semantic web is presented.
  12. Celik, I.; Abel, F.; Siehndel, P.: Adaptive faceted search on Twitter (2011) 0.02
    0.017649466 = product of:
      0.052948397 = sum of:
        0.052948397 = product of:
          0.10589679 = sum of:
            0.10589679 = weight(_text_:publishing in 2221) [ClassicSimilarity], result of:
              0.10589679 = score(doc=2221,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.4318339 = fieldWeight in 2221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2221)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In the last few years, Twitter has become a powerful tool for publishing and discussing information. Yet, content exploration in Twitter requires substantial efforts and users often have to scan information streams by hand. In this paper, we approach this problem by means of faceted search. We propose strategies for inferring facets and facet values on Twitter by enriching the semantics of individual Twitter messages and present di erent methods, including personalized and context-adaptive methods, for making faceted search on Twitter more effective.
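
A toy sketch of the basic mechanism, facet values derived from individual messages, aggregated into counts for display, and indexed for filtering, is given below. Hashtags stand in here for a single "topic" facet, which is far cruder than the semantic enrichment and personalisation strategies the paper actually proposes.

```python
from collections import Counter, defaultdict

# Toy tweets; in the paper the facets come from semantic enrichment,
# here hashtags are simply treated as values of one "topic" facet.
tweets = [
    {"id": 1, "text": "New SKOS editor released #linkeddata #vocabulary"},
    {"id": 2, "text": "Great keynote on open access #publishing"},
    {"id": 3, "text": "Publishing thesauri as #linkeddata"},
]

def facet_values(tweet):
    return [w.lstrip("#").lower() for w in tweet["text"].split() if w.startswith("#")]

counts = Counter()            # facet value -> number of matching tweets (for display)
index = defaultdict(list)     # facet value -> tweet ids (for filtering)
for t in tweets:
    for v in facet_values(t):
        counts[v] += 1
        index[v].append(t["id"])

print(counts.most_common())   # e.g. [('linkeddata', 2), ('vocabulary', 1), ('publishing', 1)]
print(index["linkeddata"])    # tweet ids matching the selected facet value
```
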
  13. Lange, C.; Mossakowski, T.; Galinski, C.; Kutz, O.: Making heterogeneous ontologies interoperable through standardisation : a Meta Ontology Language to be standardised: Ontology Integration and Interoperability (OntoIOp) (2011) 0.02
    0.016952079 = product of:
      0.050856233 = sum of:
        0.050856233 = weight(_text_:electronic in 50) [ClassicSimilarity], result of:
          0.050856233 = score(doc=50,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.259166 = fieldWeight in 50, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=50)
      0.33333334 = coord(1/3)
    
    Abstract
     Assistive technology, especially for persons with disabilities, increasingly relies on electronic communication among users, between users and their devices, and among these devices. Making such ICT accessible and inclusive often requires remedial programming, which tends to be costly or even impossible. We therefore aim at more interoperable devices, services accessing these devices, and content delivered by these services, at the levels of 1. data and metadata, 2. data models and data modelling methods, and 3. metamodels as well as a meta ontology language. Even though ontologies are widely being used to enable content interoperability, there is currently no unified framework for ontology interoperability itself. This paper outlines the design considerations underlying OntoIOp (Ontology Integration and Interoperability), a new standardisation activity in ISO/TC 37/SC 3 intended to become an international standard, which aims at filling this gap.
  14. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.01602888 = product of:
      0.048086636 = sum of:
        0.048086636 = product of:
          0.09617327 = sum of:
            0.09617327 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.09617327 = score(doc=3582,freq=4.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  15. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.01586778 = product of:
      0.047603343 = sum of:
        0.047603343 = product of:
          0.095206685 = sum of:
            0.095206685 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.095206685 = score(doc=8365,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2015 16:08:38
  16. Hyvönen, E.; Leskinen, P.; Tamper, M.; Keravuori, K.; Rantala, H.; Ikkala, E.; Tuominen, J.: BiographySampo - publishing and enriching biographies on the Semantic Web for digital humanities research (2019) 0.02
    0.015600072 = product of:
      0.046800215 = sum of:
        0.046800215 = product of:
          0.09360043 = sum of:
            0.09360043 = weight(_text_:publishing in 5799) [ClassicSimilarity], result of:
              0.09360043 = score(doc=5799,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38169086 = fieldWeight in 5799, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5799)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper argues for making a paradigm shift in publishing and using biographical dictionaries on the web, based on Linked Data. The idea is to provide the user with an enhanced reading experience of biographies by enriching contents with data linking and reasoning. In addition, versatile tooling for 1) biographical research of individual persons as well as for 2) prosopographical research on groups of people is provided. To demonstrate and evaluate the new possibilities, we present the semantic portal "BiographySampo - Finnish Biographies on the Semantic Web". The system is based on a knowledge graph extracted automatically from a collection of 13,100 textual biographies, enriched with data linking to 16 external data sources, and by harvesting external collection data from libraries, museums, and archives. The portal was released in September 2018 for free public use at: http://biografiasampo.fi.
  17. Fallaw, C.; Dunham, E.; Wickes, E.; Strong, D.; Stein, A.; Zhang, Q.; Rimkus, K.; Ingram, B.; Imker, H.J.: Overly honest data repository development (2016) 0.02
    0.015443282 = product of:
      0.046329845 = sum of:
        0.046329845 = product of:
          0.09265969 = sum of:
            0.09265969 = weight(_text_:publishing in 3371) [ClassicSimilarity], result of:
              0.09265969 = score(doc=3371,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.37785465 = fieldWeight in 3371, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3371)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     After a year of development, the library at the University of Illinois at Urbana-Champaign has launched a repository, called the Illinois Data Bank (https://databank.illinois.edu/), to provide Illinois researchers with a free, self-serve publishing platform that centralizes, preserves, and provides persistent and reliable access to Illinois research data. This article presents a holistic view of development by discussing our overarching technical, policy, and interface strategies. By openly presenting our design decisions, the rationales behind those decisions, and the associated challenges, this paper aims to contribute to the library community's work to develop repository services that meet growing data preservation and sharing needs.
  18. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.01
    0.014126732 = product of:
      0.042380195 = sum of:
        0.042380195 = weight(_text_:electronic in 3373) [ClassicSimilarity], result of:
          0.042380195 = score(doc=3373,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.21597168 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
      0.33333334 = coord(1/3)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
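
A minimal sketch of the kind of extraction step such a migration involves, reading component titles out of an EAD-like XML fragment with Python's standard library, is given below. The fragment and element paths are simplified assumptions; real finding aids are namespaced, inconsistent and far messier, which is exactly the problem the case study describes.

```python
import xml.etree.ElementTree as ET

# A drastically simplified EAD-like fragment (real finding aids are namespaced,
# deeply nested, and inconsistently encoded).
ead = """
<ead>
  <archdesc level="collection">
    <did><unittitle>Example Family Papers</unittitle></did>
    <dsc>
      <c01 level="series">
        <did><unittitle>Correspondence</unittitle><unitdate>1901-1910</unitdate></did>
      </c01>
      <c01 level="series">
        <did><unittitle>Photographs</unittitle></did>
      </c01>
    </dsc>
  </archdesc>
</ead>
"""

root = ET.fromstring(ead)
for comp in root.iter("c01"):
    title = comp.findtext("./did/unittitle")
    date = comp.findtext("./did/unitdate", default="undated")
    print(comp.get("level"), "|", title, "|", date)
```
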
  19. Röthler, D.: "Lehrautomaten" oder die MOOC-Vision der späten 60er Jahre (2014) 0.01
    0.013600955 = product of:
      0.040802862 = sum of:
        0.040802862 = product of:
          0.081605725 = sum of:
            0.081605725 = weight(_text_:22 in 1552) [ClassicSimilarity], result of:
              0.081605725 = score(doc=1552,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.46428138 = fieldWeight in 1552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1552)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2018 11:04:35
  20. Networked Knowledge Organisation Systems and Services - TPDL 2011 : The 10th European Networked Knowledge Organisation Systems (NKOS) Workshop (2011) 0.01
    0.0132371 = product of:
      0.0397113 = sum of:
        0.0397113 = product of:
          0.0794226 = sum of:
            0.0794226 = weight(_text_:publishing in 6033) [ClassicSimilarity], result of:
              0.0794226 = score(doc=6033,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.32387543 = fieldWeight in 6033, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6033)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
     Programme with links to the presentations:
     • Armando Stellato, Ahsan Morshed, Gudrun Johannsen, Yves Jacques, Caterina Caracciolo, Sachit Rajbhandari, Imma Subirats, Johannes Keizer: A Collaborative Framework for Managing and Publishing KOS
     • Christian Mader, Bernhard Haslhofer: Quality Criteria for Controlled Web Vocabularies
     • Ahsan Morshed, Benjamin Zapilko, Gudrun Johannsen, Philipp Mayr, Johannes Keizer: Evaluating approaches to automatically match thesauri from different domains for Linked Open Data
     • Johan De Smedt: SKOS extensions to cover mapping requirements
     • Mark Tomko: Translating biological data sets into Linked Data
     • Daniel Kless: Ontologies and thesauri - similarities and differences
     • Antoine Isaac, Jacco van Ossenbruggen: Europeana and semantic alignment of vocabularies
     • Douglas Tudhope: Complementary use of ontologies and (other) KOS
     • Wilko van Hoek, Brigitte Mathiak, Philipp Mayr, Sascha Schüller: Comparing the accuracy of the semantic similarity provided by the Normalized Google Distance (NGD) and the Search Term Recommender (STR)
     • Denise Bedford: Selecting and Weighting Semantically Discovered Concepts as Social Tags
     • Stella Dextre Clarke, Johan De Smedt: ISO 25964-1: a new standard for development of thesauri and exchange of thesaurus data

Languages

  • d 45
  • e 37
  • a 1
  • i 1

Types

  • a 53
  • r 3
  • m 2
  • s 2
  • x 2