Search (283 results, page 1 of 15)

  • language_ss:"e"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Araújo, P.C. de; Tennis, J.; Guimarães, J.A.: Metatheory and knowledge organization (2017) 0.08
    0.08441417 = product of:
      0.12662125 = sum of:
        0.11726358 = weight(_text_:sociology in 3858) [ClassicSimilarity], result of:
          0.11726358 = score(doc=3858,freq=2.0), product of:
            0.30495512 = queryWeight, product of:
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.043811057 = queryNorm
            0.38452733 = fieldWeight in 3858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3858)
        0.009357665 = product of:
          0.01871533 = sum of:
            0.01871533 = weight(_text_:of in 3858) [ClassicSimilarity], result of:
              0.01871533 = score(doc=3858,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.27317715 = fieldWeight in 3858, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3858)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Metatheory is meta-analytic work that originates in sociology, and its purpose is the analysis of theory. Metatheory is a common form of scholarship in knowledge organization (KO). This paper presents an analysis of five papers that are metatheoretical investigations in KO. The papers were published between 2008 and 2015 in the journal Knowledge Organization. The preliminary finding is that although the authors do metatheoretical work, most of them do not make it explicit. Of the four types of metatheoretical work, metatheorizing in order to better understand theory (Mu) is most popular. Further, the external/intellectual approach, which imports analytical lenses from other fields, was applied in four of the five papers. Finally, the use of metatheory as a method of analysis is closely related to these authors' concern about epistemological, theoretical and methodological issues in the KO domain. Metatheory, while not always explicitly acknowledged as a method, is a valuable tool to better understand the foundations of KO, the development of its research, and the influence of other domains on it.
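
The nested score explanation above is standard Lucene ClassicSimilarity (TF-IDF) debug output. As a minimal sketch of how those numbers combine, the following Python snippet recomputes the 0.08441417 score of this result from the quantities shown (term frequency, idf, queryNorm, fieldNorm, and the two coord factors). The formula is Lucene's; the helper name and variable names are ours.

```python
import math

def classic_similarity_term_score(freq, idf, query_norm, field_norm):
    """One term's contribution: queryWeight * fieldWeight, as in Lucene ClassicSimilarity."""
    tf = math.sqrt(freq)                   # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.043811057
FIELD_NORM = 0.0390625                     # fieldNorm(doc=3858)

# weight(_text_:sociology in 3858): freq=2.0, idf=6.9606886
w_sociology = classic_similarity_term_score(2.0, 6.9606886, QUERY_NORM, FIELD_NORM)

# weight(_text_:of in 3858): freq=20.0, idf=1.5637573, scaled by coord(1/2)
w_of = classic_similarity_term_score(20.0, 1.5637573, QUERY_NORM, FIELD_NORM) * 0.5

# The outer sum is scaled by coord(2/3): two of the three query clauses matched this document.
score = (w_sociology + w_of) * (2.0 / 3.0)

print(w_sociology)  # ~0.11726358
print(w_of)         # ~0.009357665
print(score)        # ~0.08441417
```
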
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.03
    0.028993158 = product of:
      0.08697947 = sum of:
        0.08697947 = product of:
          0.34791788 = sum of:
            0.34791788 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.34791788 = score(doc=1826,freq=2.0), product of:
                0.37143064 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043811057 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.25 = coord(1/4)
      0.33333334 = coord(1/3)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  3. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.03
    0.027677055 = product of:
      0.08303116 = sum of:
        0.08303116 = sum of:
          0.023673227 = weight(_text_:of in 5865) [ClassicSimilarity], result of:
            0.023673227 = score(doc=5865,freq=8.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.34554482 = fieldWeight in 5865, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.059357934 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.059357934 = score(doc=5865,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  4. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    0.02437083 = product of:
      0.07311249 = sum of:
        0.07311249 = sum of:
          0.014351131 = weight(_text_:of in 3450) [ClassicSimilarity], result of:
            0.014351131 = score(doc=3450,freq=6.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.20947541 = fieldWeight in 3450, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3450)
          0.05876136 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
            0.05876136 = score(doc=3450,freq=4.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.38301262 = fieldWeight in 3450, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3450)
      0.33333334 = coord(1/3)
    
    Abstract
    The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, are presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using a similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09.
    Date
    22. 2.2017 13:00:42
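
MIaS itself is built on Apache Lucene over Presentation MathML and is not reproduced here. As a minimal illustration of the underlying idea only (index every subformula of an expression so that a query can match part of a larger formula), the sketch below walks a toy expression tree and emits one index token per subformula. The tuple-based tree and the function names are our own simplifications, not the MIaS data model.

```python
from collections import defaultdict

# Toy expression tree standing in for parsed Presentation MathML:
# ("operator", child, child, ...) for compound nodes, plain strings for leaves.
EXPR = ("plus", ("pow", "x", "2"), ("times", "2", "x"))

def serialize(node):
    """Turn a (sub)tree into a canonical string token."""
    if isinstance(node, str):
        return node
    op, *children = node
    return f"{op}({','.join(serialize(c) for c in children)})"

def subformulae(node):
    """Yield tokens for the expression and every one of its subformulae."""
    yield serialize(node)
    if isinstance(node, tuple):
        for child in node[1:]:
            yield from subformulae(child)

# Minimal inverted index: subformula token -> document ids containing it.
index = defaultdict(set)

def index_expression(doc_id, expr):
    for token in subformulae(expr):
        index[token].add(doc_id)

index_expression("doc1", EXPR)
index_expression("doc2", ("pow", "x", "2"))

# A query for the subformula x^2 matches both the standalone expression
# and the larger formula that merely contains it.
print(index[serialize(("pow", "x", "2"))])  # both doc1 and doc2
```
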
  5. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.024179911 = product of:
      0.07253973 = sum of:
        0.07253973 = sum of:
          0.02505339 = weight(_text_:of in 1149) [ClassicSimilarity], result of:
            0.02505339 = score(doc=1149,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.36569026 = fieldWeight in 1149, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.047486346 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.047486346 = score(doc=1149,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
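
Bensman's paper compares the premises of PageRank with Garfield's citation-indexing theory rather than presenting code, so the following is only a generic illustration of the PageRank recurrence (power iteration with a damping factor), not anything taken from the paper itself; the damping factor 0.85 is the conventional choice and the toy graph is made up.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration over a dict mapping each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                        # dangling node: spread its mass evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny citation-style graph: an edge means "cites" (or "links to").
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(graph))   # C accumulates the highest rank in this toy graph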
  6. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.023484759 = product of:
      0.07045428 = sum of:
        0.07045428 = sum of:
          0.020087399 = weight(_text_:of in 1967) [ClassicSimilarity], result of:
            0.020087399 = score(doc=1967,freq=16.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2932045 = fieldWeight in 1967, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.05036688 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.05036688 = score(doc=1967,freq=4.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  7. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.019723108 = product of:
      0.059169322 = sum of:
        0.059169322 = sum of:
          0.023554565 = weight(_text_:of in 4649) [ClassicSimilarity], result of:
            0.023554565 = score(doc=4649,freq=22.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.34381276 = fieldWeight in 4649, product of:
                4.690416 = tf(freq=22.0), with freq of:
                  22.0 = termFreq=22.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.03561476 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.03561476 = score(doc=4649,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.33333334 = coord(1/3)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
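
The two Web-based measures named in the abstract, Normalized Google Distance (NGD) and pointwise mutual information (PMI), are both computed from co-occurrence counts. The sketch below shows the standard textbook formulas applied to hypothetical hit counts; it is not the authors' experimental setup, and the counts are invented purely for illustration.

```python
import math

def normalized_google_distance(fx, fy, fxy, n):
    """NGD from hit counts: fx, fy = hits for each term, fxy = hits for both, n = index size."""
    log_fx, log_fy, log_fxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(log_fx, log_fy) - log_fxy) / (math.log(n) - min(log_fx, log_fy))

def pmi(fx, fy, fxy, n):
    """Pointwise mutual information: log of p(x, y) / (p(x) * p(y))."""
    return math.log((fxy / n) / ((fx / n) * (fy / n)))

# Hypothetical hit counts for two related vocabulary terms in a web-scale index.
n = 10_000_000_000
print(normalized_google_distance(4_000_000, 2_500_000, 800_000, n))  # small value -> closely related
print(pmi(4_000_000, 2_500_000, 800_000, n))                         # positive -> co-occur more than chance
```
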
  8. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.02
    0.018134935 = product of:
      0.054404803 = sum of:
        0.054404803 = sum of:
          0.018790042 = weight(_text_:of in 3082) [ClassicSimilarity], result of:
            0.018790042 = score(doc=3082,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2742677 = fieldWeight in 3082, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
          0.03561476 = weight(_text_:22 in 3082) [ClassicSimilarity], result of:
            0.03561476 = score(doc=3082,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 3082, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current result in its rather preliminary form than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for application to knowledge organization of other subject areas with Wikidata.
    Pages
    S.15-22
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016), co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
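
The extraction of KOS records from Wikidata described in this record can be reproduced, in spirit, with a SPARQL query against the public endpoint. The sketch below is not Voß's extraction pipeline: the endpoint URL and the properties P31 (instance of) and P279 (subclass of) are standard Wikidata identifiers, while the item Q6423319 is assumed here to denote "knowledge organization system" and should be verified before use.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# P31 = instance of, P279 = subclass of.
# Q6423319 is an assumption for "knowledge organization system"; verify the QID before use.
QUERY = """
SELECT ?kos ?kosLabel WHERE {
  ?kos wdt:P31/wdt:P279* wd:Q6423319 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 25
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "kos-extraction-sketch/0.1 (illustrative example)"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["kos"]["value"], "-", row["kosLabel"]["value"])
```
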
  9. Delsey, T.: ¬The Making of RDA (2016) 0.02
    0.017670318 = product of:
      0.053010955 = sum of:
        0.053010955 = sum of:
          0.017396197 = weight(_text_:of in 2946) [ClassicSimilarity], result of:
            0.017396197 = score(doc=2946,freq=12.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.25392252 = fieldWeight in 2946, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
          0.03561476 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
            0.03561476 = score(doc=2946,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
      0.33333334 = coord(1/3)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  10. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.02
    0.017670318 = product of:
      0.053010955 = sum of:
        0.053010955 = sum of:
          0.017396197 = weight(_text_:of in 3449) [ClassicSimilarity], result of:
            0.017396197 = score(doc=3449,freq=12.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.25392252 = fieldWeight in 3449, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
          0.03561476 = weight(_text_:22 in 3449) [ClassicSimilarity], result of:
            0.03561476 = score(doc=3449,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 3449, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
      0.33333334 = coord(1/3)
    
    Abstract
    Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22. 2.2017 12:53:49
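
Zanibbi and Yuan compute TF-IDF at the level of individual expressions, with keywords drawn from LaTeX strings. The snippet below is a minimal stand-in for that idea using scikit-learn's TfidfVectorizer over crudely tokenized LaTeX; the tokenizer and the toy expressions are ours rather than the paper's feature set, and the image-based CBIR half of the paper is not sketched here.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

# Each "document" is a single LaTeX expression, so TF-IDF is expression-level.
expressions = [
    r"\frac{a}{b} + \sqrt{x^2 + y^2}",
    r"\int_0^1 x^2 \, dx",
    r"\sqrt{x^2 + y^2} = r",
]

def latex_tokens(latex):
    """Very crude keyword extraction: LaTeX control sequences, identifiers, and digits."""
    return re.findall(r"\\[A-Za-z]+|[A-Za-z]+|\d+", latex)

vectorizer = TfidfVectorizer(tokenizer=latex_tokens, lowercase=False, token_pattern=None)
matrix = vectorizer.fit_transform(expressions)

# Rank indexed expressions against a query expression by cosine similarity
# (TF-IDF vectors are L2-normalized, so the dot product is the cosine).
query = vectorizer.transform([r"\sqrt{x^2+y^2}"])
scores = (matrix @ query.T).toarray().ravel()
for idx in scores.argsort()[::-1]:
    print(round(float(scores[idx]), 3), expressions[idx])
```
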
  11. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.02
    0.015483245 = product of:
      0.046449736 = sum of:
        0.046449736 = sum of:
          0.02270656 = weight(_text_:of in 3608) [ClassicSimilarity], result of:
            0.02270656 = score(doc=3608,freq=46.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.33143494 = fieldWeight in 3608, product of:
                6.78233 = tf(freq=46.0), with freq of:
                  46.0 = termFreq=46.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.023743173 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.023743173 = score(doc=3608,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.33333334 = coord(1/3)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable (as alive in the digital world) as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  12. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.015472822 = product of:
      0.046418466 = sum of:
        0.046418466 = sum of:
          0.016739499 = weight(_text_:of in 4550) [ClassicSimilarity], result of:
            0.016739499 = score(doc=4550,freq=16.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.24433708 = fieldWeight in 4550, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
          0.029678967 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
            0.029678967 = score(doc=4550,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 4550, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
      0.33333334 = coord(1/3)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
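
The workflow described above (a template Python script using the Habanero client for the CrossRef API to populate a CSV for repository batch import) can be sketched roughly as follows. This is not the University of Waterloo script: the placeholder DOIs, output columns, and field handling are illustrative assumptions, and the JSON keys follow the public Crossref schema (title, author, container-title, issued).

```python
import csv
from habanero import Crossref

# Placeholder DOIs for illustration only; a real run would read the DOIs to import from a local list.
DOIS = ["10.1000/example.0001", "10.1000/example.0002"]

cr = Crossref()

def record_for(doi):
    """Map one Crossref work record onto the columns of the import CSV."""
    message = cr.works(ids=doi)["message"]
    authors = "; ".join(
        f"{a.get('family', '')}, {a.get('given', '')}".strip(", ")
        for a in message.get("author", [])
    )
    issued = message.get("issued", {}).get("date-parts", [[None]])
    return {
        "doi": doi,
        "title": (message.get("title") or [""])[0],
        "authors": authors,
        "journal": (message.get("container-title") or [""])[0],
        "year": issued[0][0],
    }

with open("batch_import.csv", "w", newline="", encoding="utf-8") as handle:
    writer = csv.DictWriter(handle, fieldnames=["doi", "title", "authors", "journal", "year"])
    writer.writeheader()
    for doi in DOIS:
        writer.writerow(record_for(doi))
```
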
  13. Open MIND (2015) 0.02
    0.015112445 = product of:
      0.045337334 = sum of:
        0.045337334 = sum of:
          0.015658367 = weight(_text_:of in 1648) [ClassicSimilarity], result of:
            0.015658367 = score(doc=1648,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.22855641 = fieldWeight in 1648, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
          0.029678967 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
            0.029678967 = score(doc=1648,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 1648, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
      0.33333334 = coord(1/3)
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Date
    27. 1.2015 11:48:22
  14. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.01
    0.014725267 = product of:
      0.0441758 = sum of:
        0.0441758 = sum of:
          0.014496832 = weight(_text_:of in 3200) [ClassicSimilarity], result of:
            0.014496832 = score(doc=3200,freq=12.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.21160212 = fieldWeight in 3200, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3200)
          0.029678967 = weight(_text_:22 in 3200) [ClassicSimilarity], result of:
            0.029678967 = score(doc=3200,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 3200, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3200)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper builds on the findings of a workshop held at the 2015 International Conference on Digital Preservation (iPRES), entitled "Using Open-Source Tools to Fulfill Digital Preservation Requirements" (OSS4PRES hereafter). This day-long workshop brought together participants from across the library and archives community, including practitioners, proprietary vendors, and representatives from open-source projects. The resulting conversations were surprisingly revealing: while OSS' significance within the preservation landscape was made clear, participants noted that there are a number of roadblocks that discourage or altogether prevent its use in many organizations. Overcoming these challenges will be necessary to further widespread, sustainable OSS adoption within the digital preservation community. This article will mine the rich discussions that took place at OSS4PRES to (1) summarize the workshop's key themes and major points of debate, (2) provide a comprehensive analysis of the opportunities, gaps, and challenges that using OSS entails at a philosophical, institutional, and individual level, and (3) offer a tangible set of recommendations for future work designed to broaden community engagement and enhance the sustainability of open source initiatives, drawing on both participants' experience as well as additional research.
    Date
    28.10.2016 18:22:33
  15. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.014304235 = product of:
      0.042912703 = sum of:
        0.042912703 = sum of:
          0.013233736 = weight(_text_:of in 4553) [ClassicSimilarity], result of:
            0.013233736 = score(doc=4553,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.19316542 = fieldWeight in 4553, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.029678967 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.029678967 = score(doc=4553,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.33333334 = coord(1/3)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  16. Thornton, K.: Powerful structure : inspecting infrastructures of information organization in Wikimedia Foundation projects (2016) 0.00
    0.004880361 = product of:
      0.014641082 = sum of:
        0.014641082 = product of:
          0.029282164 = sum of:
            0.029282164 = weight(_text_:of in 3288) [ClassicSimilarity], result of:
              0.029282164 = score(doc=3288,freq=34.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.4274153 = fieldWeight in 3288, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3288)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This dissertation investigates the social and technological factors of collaboratively organizing information in commons-based peer production systems. To do so, it analyzes the diverse strategies that members of Wikimedia Foundation (WMF) project communities use to organize information. Key findings from this dissertation show that conceptual structures of information organization are encoded into the infrastructure of WMF projects. The fact that WMF projects are commons-based peer production systems means that we can inspect the code that enables these systems, but a specific type of technical literacy is required to do so. I use three methods in this dissertation. I conduct a qualitative content analysis of the discussions surrounding the design, implementation and evaluation of the category system; a quantitative analysis using descriptive statistics of patterns of editing among editors who contributed to the code of templates for information boxes; and a close reading of the infrastructure used to create the category system, the infobox templates, and the knowledge base of structured data.
    Footnote
    A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington.
  17. Gödert, W.: ¬An ontology-based model for indexing and retrieval (2013) 0.00
    0.004463867 = product of:
      0.0133916 = sum of:
        0.0133916 = product of:
          0.0267832 = sum of:
            0.0267832 = weight(_text_:of in 1510) [ClassicSimilarity], result of:
              0.0267832 = score(doc=1510,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.39093933 = fieldWeight in 1510, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1510)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Starting from an unsolved problem of information retrieval, this paper presents an ontology-based model for indexing and retrieval. The model combines the methods and experiences of cognitively interpreted indexing languages with the strengths and possibilities of formal knowledge representation. The core component of the model uses inferences along the paths of typed relations between the entities of a knowledge representation to enable the determination of hit quantities in the context of retrieval processes. The entities are arranged in aspect-oriented facets to ensure a consistent hierarchical structure. The possible consequences for indexing and retrieval are discussed.
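
The core idea in the abstract above, inferring additional hits by following typed relations between entities, can be illustrated with a small graph traversal. The sketch below is our own reduction of the idea, not Gödert's formal model: documents are indexed by entities, and a query entity is expanded along an allowed set of relation types before the hit set is computed.

```python
from collections import deque

# Typed relations between entities of a toy knowledge representation.
RELATIONS = [
    ("violin", "is_a", "string_instrument"),
    ("string_instrument", "is_a", "musical_instrument"),
    ("violin", "part_of", "string_quartet"),
]

# Documents indexed by the entities they are about.
INDEX = {
    "doc1": {"violin"},
    "doc2": {"string_instrument"},
    "doc3": {"string_quartet"},
}

def expand(entity, allowed_relations):
    """Follow only the allowed relation types, transitively, from the query entity."""
    reachable, queue = {entity}, deque([entity])
    while queue:
        current = queue.popleft()
        for source, relation, target in RELATIONS:
            if source == current and relation in allowed_relations and target not in reachable:
                reachable.add(target)
                queue.append(target)
    return reachable

def hits(entity, allowed_relations):
    entities = expand(entity, allowed_relations)
    return {doc for doc, about in INDEX.items() if about & entities}

# Restricting inference to hierarchical relations yields a narrower hit set than allowing all paths.
print(sorted(hits("violin", {"is_a"})))              # ['doc1', 'doc2']
print(sorted(hits("violin", {"is_a", "part_of"})))   # ['doc1', 'doc2', 'doc3']
```
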
  18. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.00
    0.004463867 = product of:
      0.0133916 = sum of:
        0.0133916 = product of:
          0.0267832 = sum of:
            0.0267832 = weight(_text_:of in 5205) [ClassicSimilarity], result of:
              0.0267832 = score(doc=5205,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.39093933 = fieldWeight in 5205, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5205)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We provide a model for the reception of knowledge from externalized information sources. The model is based on a cognitive understanding of information processing and draws up ideas of an exchange of information in communication processes. Karl Popper's three-world theory with its orientation on falsifiable scientific knowledge is extended by John Searle's concept of collective intentionality. This allows a consistent description of externalization and reception of knowledge including scientific knowledge as well as everyday knowledge.
  19. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.00
    0.004428855 = product of:
      0.013286565 = sum of:
        0.013286565 = product of:
          0.02657313 = sum of:
            0.02657313 = weight(_text_:of in 3655) [ClassicSimilarity], result of:
              0.02657313 = score(doc=3655,freq=28.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38787308 = fieldWeight in 3655, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3655)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
    Object
    Digital Public Library of America
  20. Cossham, A.F.: Models of the bibliographic universe (2017) 0.00
    0.00436691 = product of:
      0.01310073 = sum of:
        0.01310073 = product of:
          0.02620146 = sum of:
            0.02620146 = weight(_text_:of in 3817) [ClassicSimilarity], result of:
              0.02620146 = score(doc=3817,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38244802 = fieldWeight in 3817, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3817)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    What kinds of mental models do library catalogue users have of the bibliographic universe in an age of online and electronic information? Using phenomenography and grounded analysis, this thesis identifies participants' understanding, experience, and conceptualisation of the bibliographic universe, and identifies their expectations when using library catalogues. It contrasts participants' mental models with existing LIS models, and explores the nature of the bibliographic universe. The bibliographic universe can be considered to be a social object that exists because it is inscribed in catalogue records, cataloguing codes, bibliographies, and other bibliographic tools. It is a socially constituted phenomenon.
    Content
    A thesis submitted for the degree of Doctor of Philosophy at Monash University, Faculty of Information Technology, in 2017. Cf.: https://figshare.com/articles/Models_of_the_bibliographic_universe/5216347.
    Imprint
    Melbourne : Monash University; Faculty of Information Technology

Types

  • a 184
  • s 11
  • x 7
  • r 5
  • m 2
  • n 2
  • b 1
  • i 1