Search (363 results, page 1 of 19)

  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Araújo, P.C. de; Tennis, J.; Guimarães, J.A.: Metatheory and knowledge organization (2017) 0.08
    0.08441417 = product of:
      0.12662125 = sum of:
        0.11726358 = weight(_text_:sociology in 3858) [ClassicSimilarity], result of:
          0.11726358 = score(doc=3858,freq=2.0), product of:
            0.30495512 = queryWeight, product of:
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.043811057 = queryNorm
            0.38452733 = fieldWeight in 3858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3858)
        0.009357665 = product of:
          0.01871533 = sum of:
            0.01871533 = weight(_text_:of in 3858) [ClassicSimilarity], result of:
              0.01871533 = score(doc=3858,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.27317715 = fieldWeight in 3858, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3858)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Metatheory is meta-analytic work that originates in sociology; its purpose is the analysis of theory. It is a common form of scholarship in knowledge organization (KO). This paper presents an analysis of five metatheoretical investigations in KO, published between 2008 and 2015 in the journal Knowledge Organization. The preliminary finding is that although the authors do metatheoretical work, most of them do not make this explicit. Of the four types of metatheoretical work, metatheorizing in order to better understand theory (Mu) is the most popular. Further, the external/intellectual approach, which imports analytical lenses from other fields, was applied in four of the five papers. The use of metatheory as a method of analysis is closely related to these authors' concern with epistemological, theoretical, and methodological issues in the KO domain. Metatheory, while not always explicitly acknowledged as a method, is a valuable tool for better understanding the foundations of KO, the development of its research, and the influence of other domains on it.
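    The score breakdown attached to each hit is Lucene's ClassicSimilarity (TF-IDF) "explain" output. As a cross-check, the minimal Python sketch below re-derives the "sociology" leg of hit no. 1 from the numbers in the trace; queryNorm and fieldNorm are simply taken from the trace, since they depend on the full query and on the stored field length.

      import math

      # ClassicSimilarity, as in the explain trees above:
      #   idf(t)       = 1 + ln(maxDocs / (docFreq + 1))
      #   tf(t, d)     = sqrt(termFreq)
      #   queryWeight  = idf * queryNorm
      #   fieldWeight  = tf * idf * fieldNorm
      #   term score   = queryWeight * fieldWeight
      idf = 1 + math.log(44218 / (113 + 1))   # 6.9606886 for "sociology"
      tf = math.sqrt(2.0)                     # 1.4142135 (termFreq=2.0)
      query_norm = 0.043811057                # taken from the trace
      field_norm = 0.0390625                  # taken from the trace

      query_weight = idf * query_norm         # 0.30495512
      field_weight = tf * idf * field_norm    # 0.38452733
      print(query_weight * field_weight)      # 0.11726358, as reported above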
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.03
    0.028993158 = product of:
      0.08697947 = sum of:
        0.08697947 = product of:
          0.34791788 = sum of:
            0.34791788 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.34791788 = score(doc=1826,freq=2.0), product of:
                0.37143064 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043811057 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.25 = coord(1/4)
      0.33333334 = coord(1/3)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.03
    0.027677055 = product of:
      0.08303116 = sum of:
        0.08303116 = sum of:
          0.023673227 = weight(_text_:of in 5865) [ClassicSimilarity], result of:
            0.023673227 = score(doc=5865,freq=8.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.34554482 = fieldWeight in 5865, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.059357934 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.059357934 = score(doc=5865,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  4. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    0.02437083 = product of:
      0.07311249 = sum of:
        0.07311249 = sum of:
          0.014351131 = weight(_text_:of in 3450) [ClassicSimilarity], result of:
            0.014351131 = score(doc=3450,freq=6.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.20947541 = fieldWeight in 3450, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3450)
          0.05876136 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
            0.05876136 = score(doc=3450,freq=4.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.38301262 = fieldWeight in 3450, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3450)
      0.33333334 = coord(1/3)
    
    Abstract
    The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, is presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
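    MIaS itself is built in Java on Lucene; purely to illustrate the subformula idea the abstract describes, the Python sketch below enumerates every subtree of a Presentation MathML expression - the units a math-aware index stores as terms. All names are our own, and real systems normalize subformulae (variable unification, constant folding) before indexing.

      import xml.etree.ElementTree as ET

      MML = "{http://www.w3.org/1998/Math/MathML}"

      def subformulae(mathml: str):
          """Yield every subtree of a Presentation MathML expression -
          the unit a math-aware index stores and matches as a term."""
          for node in ET.fromstring(mathml).iter():
              if node.tag.startswith(MML):
                  yield ET.tostring(node, encoding="unicode")

      # a + b yields the whole expression plus mrow, mi, mo, mi subtrees;
      # each would be normalized and indexed as its own term.
      expr = ('<math xmlns="http://www.w3.org/1998/Math/MathML">'
              '<mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow></math>')
      for s in subformulae(expr):
          print(s)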
  5. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.024179911 = product of:
      0.07253973 = sum of:
        0.07253973 = sum of:
          0.02505339 = weight(_text_:of in 1149) [ClassicSimilarity], result of:
            0.02505339 = score(doc=1149,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.36569026 = fieldWeight in 1149, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.047486346 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.047486346 = score(doc=1149,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers, by comparing the premises on which PageRank, the ranking algorithm of Google's search engine, is based to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
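    For readers comparing the two models, the recurrence at issue is PR(p) = (1-d)/N + d * sum over pages q linking to p of PR(q)/outdegree(q). A toy power-iteration sketch follows; the citation graph and damping factor are made up for illustration.

      # Power iteration for PR(p) = (1-d)/n + d * sum_{q -> p} PR(q)/out(q).
      links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
      d, n = 0.85, len(links)
      pr = {p: 1.0 / n for p in links}

      for _ in range(50):  # iterate to approximate convergence
          pr = {p: (1 - d) / n
                   + d * sum(pr[q] / len(links[q]) for q in links if p in links[q])
                for p in links}

      print(pr)  # the heavily cited node C ranks highest; A gains rank via C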
  6. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.023484759 = product of:
      0.07045428 = sum of:
        0.07045428 = sum of:
          0.020087399 = weight(_text_:of in 1967) [ClassicSimilarity], result of:
            0.020087399 = score(doc=1967,freq=16.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2932045 = fieldWeight in 1967, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.05036688 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.05036688 = score(doc=1967,freq=4.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  7. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.019723108 = product of:
      0.059169322 = sum of:
        0.059169322 = sum of:
          0.023554565 = weight(_text_:of in 4649) [ClassicSimilarity], result of:
            0.023554565 = score(doc=4649,freq=22.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.34381276 = fieldWeight in 4649, product of:
                4.690416 = tf(freq=22.0), with freq of:
                  22.0 = termFreq=22.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.03561476 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.03561476 = score(doc=4649,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.33333334 = coord(1/3)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
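    Two of the Web-based measures the abstract names can be stated compactly. A sketch with made-up hit counts (the study's actual corpora and human judgements are not reproduced here):

      import math

      def ngd(fx, fy, fxy, n):
          """Normalized Google Distance (Cilibrasi & Vitanyi) from page-hit
          counts fx, fy, joint count fxy, and index size n."""
          lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
          return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

      def pmi(px, py, pxy):
          """Pointwise mutual information from (co-)occurrence probabilities."""
          return math.log(pxy / (px * py))

      # Hypothetical hit counts for two vocabulary terms:
      print(ngd(9_000, 30_000, 2_500, 25_000_000_000))  # larger = more distant
      print(pmi(0.01, 0.02, 0.002))  # > 0: co-occur more often than chance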
  8. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.02
    0.018134935 = product of:
      0.054404803 = sum of:
        0.054404803 = sum of:
          0.018790042 = weight(_text_:of in 3082) [ClassicSimilarity], result of:
            0.018790042 = score(doc=3082,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2742677 = fieldWeight in 3082, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
          0.03561476 = weight(_text_:22 in 3082) [ClassicSimilarity], result of:
            0.03561476 = score(doc=3082,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 3082, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current, rather preliminary result than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for applying Wikidata to the knowledge organization of other subject areas.
    Pages
    S.15-22
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  9. Delsey, T.: ¬The Making of RDA (2016) 0.02
    0.017670318 = product of:
      0.053010955 = sum of:
        0.053010955 = sum of:
          0.017396197 = weight(_text_:of in 2946) [ClassicSimilarity], result of:
            0.017396197 = score(doc=2946,freq=12.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.25392252 = fieldWeight in 2946, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
          0.03561476 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
            0.03561476 = score(doc=2946,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
      0.33333334 = coord(1/3)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  10. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.02
    0.017670318 = product of:
      0.053010955 = sum of:
        0.053010955 = sum of:
          0.017396197 = weight(_text_:of in 3449) [ClassicSimilarity], result of:
            0.017396197 = score(doc=3449,freq=12.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.25392252 = fieldWeight in 3449, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
          0.03561476 = weight(_text_:22 in 3449) [ClassicSimilarity], result of:
            0.03561476 = score(doc=3449,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 3449, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
      0.33333334 = coord(1/3)
    
    Abstract
    Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22. 2.2017 12:53:49
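    A toy sketch of the expression-level TF-IDF idea: the individual LaTeX expression, not the document containing it, is the indexing unit. Whitespace tokenization is a deliberate simplification of the paper's setup.

      import math
      from collections import Counter

      # Each expression is its own "document" for TF-IDF purposes.
      expressions = [r"\frac a b".split(), r"a + b".split(), r"x + y".split()]

      df = Counter(tok for expr in expressions for tok in set(expr))
      n = len(expressions)

      def tfidf(expr):
          tf = Counter(expr)
          return {t: (c / len(expr)) * math.log(n / df[t]) for t, c in tf.items()}

      # Rare tokens such as \frac outweigh common ones such as a:
      print(tfidf(expressions[0]))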
  11. Treude, L.: ¬Das Problem der Konzeptdefinition in der Wissensorganisation : über einen missglückten Versuch der Klärung (2013) 0.02
    0.016606232 = product of:
      0.049818695 = sum of:
        0.049818695 = sum of:
          0.014203937 = weight(_text_:of in 3060) [ClassicSimilarity], result of:
            0.014203937 = score(doc=3060,freq=8.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.20732689 = fieldWeight in 3060, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=3060)
          0.03561476 = weight(_text_:22 in 3060) [ClassicSimilarity], result of:
            0.03561476 = score(doc=3060,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 3060, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3060)
      0.33333334 = coord(1/3)
    
    Abstract
    In their recent article "Nodes and arcs: concept map, semiotics, and knowledge organization", Alon Friedman and Richard P. Smiraglia announce an "empirical demonstration of how the domain [of knowledge organisation] itself understands the meaning of a concept". Clarifying the notion of a concept is a welcome undertaking, which the authors pursue through an empirical study of concept maps from the field of knowledge organization. Whereas Friedman, in his 2011 article "Concept theory and semiotics in knowledge organization" [Fn 01], still confined himself to language as the medium of the sign process, he now turns to visualizations as a form of representation and thus appears to extend his approach to the pictorial dimension. At least this is what one expects after reading the description of Friedman and Smiraglia's current project, which - as the authors proclaim - was carried out on a semiotic basis.
    Content
    Cf.: http://www.libreas.eu/09treude.htm. Refers to: Alon Friedman, Richard P. Smiraglia (2013): Nodes and arcs: concept map, semiotics, and knowledge organization. In: Journal of Documentation, Vol. 69/1, S.27-48.
    Source
    LIBREAS: Library ideas. no.22, 2013, S.xx-xx
  12. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.02
    0.015483245 = product of:
      0.046449736 = sum of:
        0.046449736 = sum of:
          0.02270656 = weight(_text_:of in 3608) [ClassicSimilarity], result of:
            0.02270656 = score(doc=3608,freq=46.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.33143494 = fieldWeight in 3608, product of:
                6.78233 = tf(freq=46.0), with freq of:
                  46.0 = termFreq=46.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.023743173 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.023743173 = score(doc=3608,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.33333334 = coord(1/3)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  13. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.015472822 = product of:
      0.046418466 = sum of:
        0.046418466 = sum of:
          0.016739499 = weight(_text_:of in 4550) [ClassicSimilarity], result of:
            0.016739499 = score(doc=4550,freq=16.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.24433708 = fieldWeight in 4550, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
          0.029678967 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
            0.029678967 = score(doc=4550,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 4550, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
      0.33333334 = coord(1/3)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods has been proposed for harvesting publication metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
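    A minimal sketch of the kind of template script the abstract describes, using the habanero client for the CrossRef API; the DOIs and column layout here are our own illustration, and the authors' actual workflow additionally merges a Web of Science TSV export.

      import csv
      from habanero import Crossref  # pip install habanero

      # Illustrative DOI list; a real run would read these from a file.
      dois = ["10.1038/nphys1170", "10.1002/asi.23329"]

      cr = Crossref()
      with open("batch_import.csv", "w", newline="", encoding="utf-8") as f:
          writer = csv.writer(f)
          writer.writerow(["doi", "title", "issued"])
          for doi in dois:
              msg = cr.works(ids=doi)["message"]  # one CrossRef record per DOI
              writer.writerow([
                  doi,
                  (msg.get("title") or [""])[0],
                  "-".join(str(part) for part in msg["issued"]["date-parts"][0]),
              ])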
  14. Open MIND (2015) 0.02
    0.015112445 = product of:
      0.045337334 = sum of:
        0.045337334 = sum of:
          0.015658367 = weight(_text_:of in 1648) [ClassicSimilarity], result of:
            0.015658367 = score(doc=1648,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.22855641 = fieldWeight in 1648, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
          0.029678967 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
            0.029678967 = score(doc=1648,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 1648, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
      0.33333334 = coord(1/3)
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Date
    27. 1.2015 11:48:22
  15. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.01
    0.014725267 = product of:
      0.0441758 = sum of:
        0.0441758 = sum of:
          0.014496832 = weight(_text_:of in 3200) [ClassicSimilarity], result of:
            0.014496832 = score(doc=3200,freq=12.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.21160212 = fieldWeight in 3200, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3200)
          0.029678967 = weight(_text_:22 in 3200) [ClassicSimilarity], result of:
            0.029678967 = score(doc=3200,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 3200, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3200)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper builds on the findings of a workshop held at the 2015 International Conference on Digital Preservation (iPRES), entitled "Using Open-Source Tools to Fulfill Digital Preservation Requirements" (OSS4PRES hereafter). This day-long workshop brought together participants from across the library and archives community, including practitioners, proprietary vendors, and representatives from open-source projects. The resulting conversations were surprisingly revealing: while OSS' significance within the preservation landscape was made clear, participants noted that there are a number of roadblocks that discourage or altogether prevent its use in many organizations. Overcoming these challenges will be necessary to further widespread, sustainable OSS adoption within the digital preservation community. This article will mine the rich discussions that took place at OSS4PRES to (1) summarize the workshop's key themes and major points of debate, (2) provide a comprehensive analysis of the opportunities, gaps, and challenges that using OSS entails at a philosophical, institutional, and individual level, and (3) offer a tangible set of recommendations for future work designed to broaden community engagement and enhance the sustainability of open-source initiatives, drawing on participants' experience as well as additional research.
    Date
    28.10.2016 18:22:33
  16. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.01
    0.014496579 = product of:
      0.043489736 = sum of:
        0.043489736 = product of:
          0.17395894 = sum of:
            0.17395894 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.17395894 = score(doc=4388,freq=2.0), product of:
                0.37143064 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043811057 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.25 = coord(1/4)
      0.33333334 = coord(1/3)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  17. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.014304235 = product of:
      0.042912703 = sum of:
        0.042912703 = sum of:
          0.013233736 = weight(_text_:of in 4553) [ClassicSimilarity], result of:
            0.013233736 = score(doc=4553,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.19316542 = fieldWeight in 4553, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.029678967 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.029678967 = score(doc=4553,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.33333334 = coord(1/3)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  18. Franke, F.: ¬Das Framework for Information Literacy : neue Impulse für die Förderung von Informationskompetenz in Deutschland?! (2017) 0.01
    0.014238909 = product of:
      0.042716727 = sum of:
        0.042716727 = sum of:
          0.0071019684 = weight(_text_:of in 2248) [ClassicSimilarity], result of:
            0.0071019684 = score(doc=2248,freq=2.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.103663445 = fieldWeight in 2248, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=2248)
          0.03561476 = weight(_text_:22 in 2248) [ClassicSimilarity], result of:
            0.03561476 = score(doc=2248,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 2248, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2248)
      0.33333334 = coord(1/3)
    
    Abstract
    The Framework for Information Literacy for Higher Education was adopted by the board of the Association of College & Research Libraries (ACRL) in January 2016. It is based on the idea of "threshold concepts" and sees information literacy as closely tied to scholarship and research. In teaching information literacy, it therefore places strong emphasis on the "why", not only on the "what". The Framework's approach has been widely and controversially debated. Does it really offer a new perspective on fostering information literacy, or is it mostly old wine in new bottles? Can the Framework give new impetus to the activities of libraries in Germany, or does it describe something we have long been doing? This article offers suggestions as to what consequences the Framework may have for our courses and what changed learning objectives may be associated with it. In doing so, it argues for a comprehensive understanding of information literacy that is not limited to individual aspects such as search skills.
    Source
    o-bib: Das offene Bibliotheksjournal. 4(2017) Nr.4, S.22-29
  19. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.01
    0.013990799 = product of:
      0.041972395 = sum of:
        0.041972395 = product of:
          0.08394479 = sum of:
            0.08394479 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.08394479 = score(doc=3582,freq=4.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  20. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.01
    0.013850185 = product of:
      0.041550554 = sum of:
        0.041550554 = product of:
          0.08310111 = sum of:
            0.08310111 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.08310111 = score(doc=8365,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2015 16:08:38

Languages

  • e 283
  • d 65
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1

Types

  • a 236
  • s 14
  • x 10
  • r 8
  • m 4
  • n 2
  • b 1
  • i 1