Search (91 results, page 1 of 5)

  • language_ss:"e"
  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.02
    Abstract
    Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22.02.2017 12:53:49
  2. Delsey, T.: ¬The Making of RDA (2016) 0.02
    Date
    17.05.2016 19:22:40
    Source
    Jlis.it. 7(2016) no.2, S.25-47
  3. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.02
    Pages
    S.15-22
  4. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.02
    Date
    28.10.2016 18:22:33
  5. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    Date
    22.02.2017 12:51:57
  6. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09.
    Date
    22.02.2017 13:00:42
  7. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  8. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.01
    Date
    17.12.2013 11:02:22
  9. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.01
    Date
    10.11.2018 16:27:22
  10. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    Date
    16.11.2018 14:22:01
  11. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.01
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable, as alive in the digital world, as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  12. Aytac, S.; Slutsky, B.: Published librarian research, 2008 through 2012 : analyses and perspectives (2014) 0.00
    Content
    Cf. also: Coates, H.L.: Library and information science research literature is chiefly descriptive and relies heavily on survey and content analysis methods. In: Evidence based library and information practice. 10(2015) no.4, S.215-217.
    Source
    Collaborative librarianship. 6(2014) no.4, S.147-159
  13. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.00
    Content
    Thesis submitted to the Graduate School of Natural and Applied Sciences of Middle East Technical University in partial fulfilment of the requirements for the degree of Master of Science in Computer Engineering (XII, 57 S.)
    Source
    Information Systems. 37(2012) no.4, S.294-305
  14. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.00
    Abstract
    In this work, we present the results of research for evaluating a methodology for extracting mathematical terms for precalculus using the techniques for semantically-oriented statistical search. We use the corpus-based approach and the combination of different statistically-based techniques for extracting keywords, collocations and co-occurrences incorporated in the Sketch Engine software. We evaluate the collocations candidate terms for the basic concept function(s) and approve the related methodology by precalculus domain conceptual terms definitions. Finally, we offer a conceptual terms hierarchical representation and discuss the results with respect to their possible applications.
    Source
    Procedia Technology. 1(2012), S.464-468
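The collocation-based term extraction summarized in the abstract above can be sketched with a simple pointwise-mutual-information ranking over adjacent word pairs. This is a minimal illustration of the general technique, not the Sketch Engine's actual scoring; the mini-corpus is invented for the example.

```python
import math
from collections import Counter

# Hypothetical mini-corpus of precalculus text (illustration only).
corpus = (
    "the inverse function of an exponential function is a logarithmic function "
    "a rational function is a quotient of polynomial functions "
    "the graph of a quadratic function is a parabola"
).split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent word pair."""
    p_pair = bigrams[(w1, w2)] / (n - 1)
    return math.log(p_pair / ((unigrams[w1] / n) * (unigrams[w2] / n)))

# Rank adjacent pairs by PMI; multiword domain terms such as
# 'exponential function' surface as high-scoring collocations.
ranked = sorted(bigrams, key=lambda pair: pmi(*pair), reverse=True)
```

In practice such candidate lists are then filtered against domain definitions, which is the evaluation step the abstract describes.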
  15. Wang, S.; Isaac, A.; Schlobach, S.; Meij, L. van der; Schopman, B.: Instance-based semantic interoperability in the cultural heritage (2012) 0.00
    Source
    Semantic Web journal. 3(2012) no.1, S.45-64
  16. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.00
  17. Hyning, V. Van; Lintott, C.; Blickhan, S.; Trouille, L.: Transforming libraries and archives through crowdsourcing (2017) 0.00
    Source
    D-Lib magazine. 23(2017) nos.5/6, xx S
  18. Schulz, S.; Schober, D.; Tudose, I.; Stenzhorn, H.: ¬The pitfalls of thesaurus ontologization : the case of the NCI thesaurus (2010) 0.00
    Pages
    S.727-731
  19. Zadeh, B.Q.; Handschuh, S.: ¬The ACL RD-TEC : a dataset for benchmarking terminology extraction and classification in computational linguistics (2014) 0.00
    Pages
    S.52-63
  20. Gorman, M.: Revisiting enduring values (2015) 0.00
    Footnote
    Reference to the book: Gorman, M.: Our enduring values: librarianship in the 21st century. Chicago [et al.]: American Library Assn., 2000. IX, 188 S.
    Source
    Jlis.it. 6(2015) no.2, S.13-33