Search (189 results, page 1 of 10)

  • language_ss:"e"
  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Guidi, F.; Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge (2015) 0.03
    0.030340679 = product of:
      0.060681358 = sum of:
        0.060681358 = sum of:
          0.009593598 = weight(_text_:a in 5865) [ClassicSimilarity], result of:
            0.009593598 = score(doc=5865,freq=6.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.22065444 = fieldWeight in 5865, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.05108776 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.05108776 = score(doc=5865,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.5 = coord(1/2)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
    Type
    a
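The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(tf) × idf × fieldNorm, and the clause sum is scaled by the coordination factor coord(1/2). A minimal Python sketch reproducing the arithmetic for entry 1, using the values shown in the tree (not the live index):

```python
import math

def term_score(tf, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    queryWeight * fieldWeight."""
    query_weight = idf * query_norm                  # e.g. 1.153047 * 0.037706986
    field_weight = math.sqrt(tf) * idf * field_norm  # sqrt(tf) * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.037706986   # queryNorm from the explanation tree
FIELD_NORM = 0.078125      # fieldNorm(doc=5865)

# term "a": tf=6, idf=1.153047; term "22": tf=2, idf=3.5018296
s_a  = term_score(6.0, 1.153047,  QUERY_NORM, FIELD_NORM)   # ~0.0095936
s_22 = term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)   # ~0.0510878

coord = 0.5                # shown as coord(1/2) in the tree
total = (s_a + s_22) * coord
print(round(total, 9))     # ~0.030340679, the displayed score for entry 1
```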
  2. Sojka, P.; Liska, M.: The art of mathematics retrieval (2011) 0.03
    0.029622 = product of:
      0.059244 = sum of:
        0.059244 = sum of:
          0.008669697 = weight(_text_:a in 3450) [ClassicSimilarity], result of:
            0.008669697 = score(doc=3450,freq=10.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.19940455 = fieldWeight in 3450, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3450)
          0.050574303 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
            0.050574303 = score(doc=3450,freq=4.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.38301262 = fieldWeight in 3450, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3450)
      0.5 = coord(1/2)
    
    Abstract
     The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, is presented and design decisions are discussed. We argue for an approach based on Presentation MathML using similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
     Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
    Type
    a
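Entry 2 rests on indexing every subformula of a Presentation MathML expression so that similar subexpressions can be matched later. As a rough illustration of that idea only (not the actual MIaS code), the sketch below walks a small MathML tree with Python's xml.etree and emits each subtree's serialization as an index term:

```python
import xml.etree.ElementTree as ET

# toy Presentation MathML for x^2 + 1
MATHML = """<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mrow><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><mn>1</mn></mrow>
</math>"""

def subformulae(elem):
    """Yield a serialized string for every subtree of the expression."""
    yield ET.tostring(elem, encoding="unicode").strip()
    for child in elem:
        yield from subformulae(child)

root = ET.fromstring(MATHML)
index_terms = set(subformulae(root))   # each subformula becomes an index term
for term in sorted(index_terms):
    print(term)
```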
  3. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.024024643 = product of:
      0.048049286 = sum of:
        0.048049286 = sum of:
          0.0046998835 = weight(_text_:a in 1967) [ClassicSimilarity], result of:
            0.0046998835 = score(doc=1967,freq=4.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.10809815 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.043349404 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.043349404 = score(doc=1967,freq=4.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.5 = coord(1/2)
    
    Abstract
     This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Type
    a
  4. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.02356836 = product of:
      0.04713672 = sum of:
        0.04713672 = sum of:
          0.0062665115 = weight(_text_:a in 1149) [ClassicSimilarity], result of:
            0.0062665115 = score(doc=1149,freq=4.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.14413087 = fieldWeight in 1149, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.04087021 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.04087021 = score(doc=1149,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Type
    a
  5. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.02
    0.019041913 = product of:
      0.038083825 = sum of:
        0.038083825 = sum of:
          0.0074311686 = weight(_text_:a in 3449) [ClassicSimilarity], result of:
            0.0074311686 = score(doc=3449,freq=10.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.1709182 = fieldWeight in 3449, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
          0.030652655 = weight(_text_:22 in 3449) [ClassicSimilarity], result of:
            0.030652655 = score(doc=3449,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.23214069 = fieldWeight in 3449, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
      0.5 = coord(1/2)
    
    Abstract
     Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22. 2.2017 12:53:49
    Type
    a
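The keyword method in entry 5 treats each expression, rather than each document, as the TF-IDF unit, with keywords drawn from LaTeX strings. A minimal sketch of that idea, with a toy tokenizer and a three-expression corpus (both are assumptions, not the authors' implementation):

```python
import math
import re
from collections import Counter

# toy corpus: each "document" is a single LaTeX expression
expressions = [
    r"\frac{a}{b} + \sqrt{x}",
    r"\sqrt{x^2 + y^2}",
    r"\int_0^1 \frac{1}{x} dx",
]

def tokens(latex):
    """Crude keyword extraction from a LaTeX string (commands, letters, digits)."""
    return re.findall(r"\\[a-zA-Z]+|[a-zA-Z]+|\d+", latex)

docs = [Counter(tokens(e)) for e in expressions]
N = len(docs)

def tfidf(term, doc):
    tf = doc[term]
    df = sum(1 for d in docs if term in d)
    return tf * math.log(N / df) if df else 0.0

# score a keyword query against every expression
query = [r"\sqrt", "x"]
for expr, doc in zip(expressions, docs):
    score = sum(tfidf(t, doc) for t in query)
    print(f"{score:.3f}  {expr}")
```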
  6. Delsey, T.: The Making of RDA (2016) 0.02
    0.018204406 = product of:
      0.03640881 = sum of:
        0.03640881 = sum of:
          0.0057561584 = weight(_text_:a in 2946) [ClassicSimilarity], result of:
            0.0057561584 = score(doc=2946,freq=6.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.13239266 = fieldWeight in 2946, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
          0.030652655 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
            0.030652655 = score(doc=2946,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.23214069 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
      0.5 = coord(1/2)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
    Type
    a
  7. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.02
    0.01767627 = product of:
      0.03535254 = sum of:
        0.03535254 = sum of:
          0.0046998835 = weight(_text_:a in 3082) [ClassicSimilarity], result of:
            0.0046998835 = score(doc=3082,freq=4.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.10809815 = fieldWeight in 3082, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
          0.030652655 = weight(_text_:22 in 3082) [ClassicSimilarity], result of:
            0.030652655 = score(doc=3082,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.23214069 = fieldWeight in 3082, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
      0.5 = coord(1/2)
    
    Abstract
     This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current result, which is still rather preliminary, than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for application to knowledge organization of other subject areas with Wikidata.
    Pages
    S.15-22
    Type
    a
  8. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.01692609 = product of:
      0.03385218 = sum of:
        0.03385218 = sum of:
          0.008308299 = weight(_text_:a in 4550) [ClassicSimilarity], result of:
            0.008308299 = score(doc=4550,freq=18.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.19109234 = fieldWeight in 4550, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
          0.02554388 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
            0.02554388 = score(doc=4550,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.19345059 = fieldWeight in 4550, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
      0.5 = coord(1/2)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
    Type
    a
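The workflow in entry 8 centres on a Python script that uses the Habanero library to pull metadata from the CrossRef API and populate a CSV for repository batch import. A minimal sketch of that pattern follows; the DOI list and the column set are invented for illustration and are not the Waterloo template itself:

```python
import csv
from habanero import Crossref

cr = Crossref()
dois = ["10.1000/example.doi"]   # hypothetical input list of DOIs

rows = []
for doi in dois:
    work = cr.works(ids=doi)["message"]      # CrossRef record for one DOI
    authors = "; ".join(
        f"{a.get('family', '')}, {a.get('given', '')}"
        for a in work.get("author", [])
    )
    rows.append({
        "doi": doi,
        "title": (work.get("title") or [""])[0],
        "authors": authors,
        "year": work.get("issued", {}).get("date-parts", [[None]])[0][0],
    })

# write a simple CSV ready for batch upload
with open("batch_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["doi", "title", "authors", "year"])
    writer.writeheader()
    writer.writerows(rows)
```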
  9. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.02
    0.016163789 = product of:
      0.032327577 = sum of:
        0.032327577 = sum of:
          0.0067836978 = weight(_text_:a in 3200) [ClassicSimilarity], result of:
            0.0067836978 = score(doc=3200,freq=12.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.15602624 = fieldWeight in 3200, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3200)
          0.02554388 = weight(_text_:22 in 3200) [ClassicSimilarity], result of:
            0.02554388 = score(doc=3200,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.19345059 = fieldWeight in 3200, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3200)
      0.5 = coord(1/2)
    
    Abstract
     This paper builds on the findings of a workshop held at the 2015 International Conference on Digital Preservation (iPRES), entitled, "Using Open-Source Tools to Fulfill Digital Preservation Requirements" (OSS4PRES hereafter). This day-long workshop brought together participants from across the library and archives community, including practitioners, proprietary vendors, and representatives from open-source projects. The resulting conversations were surprisingly revealing: while OSS' significance within the preservation landscape was made clear, participants noted that there are a number of roadblocks that discourage or altogether prevent its use in many organizations. Overcoming these challenges will be necessary to further widespread, sustainable OSS adoption within the digital preservation community. This article will mine the rich discussions that took place at OSS4PRES to (1) summarize the workshop's key themes and major points of debate, (2) provide a comprehensive analysis of the opportunities, gaps, and challenges that using OSS entails at a philosophical, institutional, and individual level, and (3) offer a tangible set of recommendations for future work designed to broaden community engagement and enhance the sustainability of open source initiatives, drawing on both participants' experience as well as additional research.
    Date
    28.10.2016 18:22:33
    Type
    a
  10. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.02
    0.015541373 = product of:
      0.031082746 = sum of:
        0.031082746 = sum of:
          0.0055388655 = weight(_text_:a in 4553) [ClassicSimilarity], result of:
            0.0055388655 = score(doc=4553,freq=8.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.12739488 = fieldWeight in 4553, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.02554388 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.02554388 = score(doc=4553,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.5 = coord(1/2)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Type
    a
  11. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.01
    0.014507939 = product of:
      0.029015878 = sum of:
        0.029015878 = sum of:
          0.008580774 = weight(_text_:a in 3608) [ClassicSimilarity], result of:
            0.008580774 = score(doc=3608,freq=30.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.19735932 = fieldWeight in 3608, product of:
                5.477226 = tf(freq=30.0), with freq of:
                  30.0 = termFreq=30.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.020435104 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.020435104 = score(doc=3608,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.5 = coord(1/2)
    
    Abstract
     You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable (as alive in the digital world) as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Type
    a
  12. Denton, W.: On dentographs, a new method of visualizing library collections (2012) 0.00
    0.0031332558 = product of:
      0.0062665115 = sum of:
        0.0062665115 = product of:
          0.012533023 = sum of:
            0.012533023 = weight(_text_:a in 580) [ClassicSimilarity], result of:
              0.012533023 = score(doc=580,freq=16.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.28826174 = fieldWeight in 580, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A dentograph is a visualization of a library's collection built on the idea that a classification scheme is a mathematical function mapping one set of things (books or the universe of knowledge) onto another (a set of numbers and letters). Dentographs can visualize aspects of just one collection or can be used to compare two or more collections. This article describes how to build them, with examples and code using Ruby and R, and discusses some problems and future directions.
    Type
    a
  13. Academic publishing : No peeking (2014) 0.00
    0.0027415988 = product of:
      0.0054831975 = sum of:
        0.0054831975 = product of:
          0.010966395 = sum of:
            0.010966395 = weight(_text_:a in 805) [ClassicSimilarity], result of:
              0.010966395 = score(doc=805,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25222903 = fieldWeight in 805, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=805)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A publishing giant goes after the authors of its journals' papers
    Type
    a
  14. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.00
    0.0026563467 = product of:
      0.0053126933 = sum of:
        0.0053126933 = product of:
          0.010625387 = sum of:
            0.010625387 = weight(_text_:a in 4449) [ClassicSimilarity], result of:
              0.010625387 = score(doc=4449,freq=46.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.24438578 = fieldWeight in 4449, product of:
                  6.78233 = tf(freq=46.0), with freq of:
                    46.0 = termFreq=46.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4449)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision, and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene: an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
    Type
    a
  15. Kiela, D.; Clark, S.: Detecting compositionality of multi-word expressions using nearest neighbours in vector space models (2013) 0.00
    0.002477056 = product of:
      0.004954112 = sum of:
        0.004954112 = product of:
          0.009908224 = sum of:
            0.009908224 = weight(_text_:a in 1161) [ClassicSimilarity], result of:
              0.009908224 = score(doc=1161,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.22789092 = fieldWeight in 1161, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present a novel unsupervised approach to detecting the compositionality of multi-word expressions. We compute the compositionality of a phrase through substituting the constituent words with their "neighbours" in a semantic vector space and averaging over the distance between the original phrase and the substituted neighbour phrases. Several methods of obtaining neighbours are presented. The results are compared to existing supervised results and achieve state-of-the-art performance on a verb-object dataset of human compositionality ratings.
    Type
    a
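Entry 15 scores compositionality by substituting each constituent of a phrase with its vector-space neighbours and averaging how far the substituted variants land from the original phrase: literal phrases stay close, idioms drift. A toy sketch of one way to realize this, with hand-made vectors, invented neighbour lists, and simple additive composition (all assumptions, not the paper's setup):

```python
import numpy as np

# toy distributional vectors; a real setup would use vectors learned from a corpus
vec = {
    "red":        np.array([0.9, 0.1, 0.0]),
    "crimson":    np.array([0.8, 0.2, 0.0]),
    "car":        np.array([0.1, 0.9, 0.1]),
    "vehicle":    np.array([0.2, 0.8, 0.1]),
    "herring":    np.array([0.1, 0.2, 0.3]),
    "fish":       np.array([0.2, 0.1, 0.3]),
    # observed phrase vectors: "red car" is literal, "red herring" is idiomatic
    "red car":     np.array([0.8, 0.9, 0.1]),
    "red herring": np.array([0.0, 0.1, 0.9]),
}

neighbours = {"red": ["crimson"], "car": ["vehicle"], "herring": ["fish"]}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compositionality(phrase):
    """Average similarity between the observed phrase vector and the
    composed vectors of neighbour-substituted variants."""
    words = phrase.split()
    sims = []
    for i, w in enumerate(words):
        for n in neighbours.get(w, []):
            variant = words[:i] + [n] + words[i + 1:]
            composed = sum(vec[t] for t in variant)   # simple additive composition
            sims.append(cos(vec[phrase], composed))
    return float(np.mean(sims))

print(compositionality("red car"))      # high: variants stay close to the phrase
print(compositionality("red herring"))  # low: the idiom sits far from its variants
```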
  16. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.00
    0.002477056 = product of:
      0.004954112 = sum of:
        0.004954112 = product of:
          0.009908224 = sum of:
            0.009908224 = weight(_text_:a in 1563) [ClassicSimilarity], result of:
              0.009908224 = score(doc=1563,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.22789092 = fieldWeight in 1563, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index in dependence on the length of the citation time window.
    Type
    a
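Entry 16's timed Hirsch index simply restricts the citation counts to a chosen time window before computing the usual h. A small sketch of that restriction (the per-paper citation years below are invented example data):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def timed_h_index(papers, start, end):
    """h-index using only citations whose year falls in [start, end]."""
    windowed = [sum(start <= y <= end for y in years) for years in papers]
    return h_index(windowed)

# each paper is represented by the years in which it was cited (invented data)
papers = [
    [2005, 2006, 2012, 2013, 2014],
    [2004, 2005, 2006, 2007],
    [2011, 2012, 2013],
    [2015],
]

print(h_index([len(p) for p in papers]))   # conventional h-index: 3
print(timed_h_index(papers, 2010, 2015))   # timed h-index over 2010-2015: 2
```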
  17. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.00
    0.002477056 = product of:
      0.004954112 = sum of:
        0.004954112 = product of:
          0.009908224 = sum of:
            0.009908224 = weight(_text_:a in 5205) [ClassicSimilarity], result of:
              0.009908224 = score(doc=5205,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.22789092 = fieldWeight in 5205, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     We provide a model for the reception of knowledge from externalized information sources. The model is based on a cognitive understanding of information processing and draws on ideas of an exchange of information in communication processes. Karl Popper's three-world theory, with its orientation on falsifiable scientific knowledge, is extended by John Searle's concept of collective intentionality. This allows a consistent description of externalization and reception of knowledge, including scientific knowledge as well as everyday knowledge.
    Type
    a
  18. McGrath, K.; Kules, B.; Fitzpatrick, C.: FRBR and facets provide flexible, work-centric access to items in library collections (2011) 0.00
    0.002374294 = product of:
      0.004748588 = sum of:
        0.004748588 = product of:
          0.009497176 = sum of:
            0.009497176 = weight(_text_:a in 2430) [ClassicSimilarity], result of:
              0.009497176 = score(doc=2430,freq=12.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.21843673 = fieldWeight in 2430, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2430)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This paper explores a technique to improve searcher access to library collections by providing a faceted search interface built on a data model based on the Functional Requirements for Bibliographic Records (FRBR). The prototype provides a work-centric view of a moving image collection that is integrated with bibliographic and holdings data. Two sets of facets address important user needs: "what do you want?" and "how/where do you want it?", enabling patrons to narrow, broaden and pivot across facet values instead of limiting them to the tree-structured hierarchy common with existing FRBR applications. The data model illustrates how FRBR is being adapted and applied beyond the traditional library catalog.
    Type
    a
  19. Fallaw, C.; Dunham, E.; Wickes, E.; Strong, D.; Stein, A.; Zhang, Q.; Rimkus, K.; Ingram, B.; Imker, H.J.: Overly honest data repository development (2016) 0.00
    0.002374294 = product of:
      0.004748588 = sum of:
        0.004748588 = product of:
          0.009497176 = sum of:
            0.009497176 = weight(_text_:a in 3371) [ClassicSimilarity], result of:
              0.009497176 = score(doc=3371,freq=12.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.21843673 = fieldWeight in 3371, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3371)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    After a year of development, the library at the University of Illinois at Urbana-Champaign has launched a repository, called the Illinois Data Bank (https://databank.illinois.edu/), to provide Illinois researchers with a free, self-serve publishing platform that centralizes, preserves, and provides persistent and reliable access to Illinois research data. This article presents a holistic view of development by discussing our overarching technical, policy, and interface strategies. By openly presenting our design decisions, the rationales behind those decisions, and associated challenges this paper aims to contribute to the library community's work to develop repository services that meet growing data preservation and sharing needs.
    Type
    a
  20. Zolyomi, A.; Tennis, J.T.: Autism prism : a domain analysis examining neurodiversity (2017) 0.00
    0.002374294 = product of:
      0.004748588 = sum of:
        0.004748588 = product of:
          0.009497176 = sum of:
            0.009497176 = weight(_text_:a in 3864) [ClassicSimilarity], result of:
              0.009497176 = score(doc=3864,freq=12.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.21843673 = fieldWeight in 3864, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3864)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Autism is a complex neurological phenomenon that affects our society on individual, community, and cultural levels. There is an ongoing dialog between the medical, scientific and autism communities that critiques and molds the meaning of autism. The prevailing social model perspective, the neurodiversity paradigm, views autism as a natural variation in human neurology. Towards the goal of crystallizing the various facets of autism, this paper conducts a domain analysis of neurodiversity. Through this analysis, we explore the dynamics between diagnosis, identity, power, and inclusion.
    Type
    a