Search (135 results, page 1 of 7)

  • type_ss:"a"
  • year_i:[2020 TO 2030}
  • language_ss:"e"
  1. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.05
    0.04654435 = product of:
      0.0930887 = sum of:
        0.0930887 = sum of:
          0.049993843 = weight(_text_:i in 40) [ClassicSimilarity], result of:
            0.049993843 = score(doc=40,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.29170483 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.043094855 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.043094855 = score(doc=40,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
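    The indented breakdown above (and the similar ones below) is standard Lucene/Solr "explain" output for TF-IDF scoring with ClassicSimilarity. As a reading aid, the top score can be reconstructed as follows; the formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are the usual ClassicSimilarity definitions and are assumed here rather than stated on the page:

      \begin{align*}
      \mathrm{idf}(\texttt{i}) &= 1 + \ln\tfrac{44218}{2765+1} \approx 3.7717297,\qquad
      \mathrm{idf}(\texttt{22}) = 1 + \ln\tfrac{44218}{3622+1} \approx 3.5018296,\\
      w_q(t) &= \mathrm{idf}(t)\cdot\mathrm{queryNorm},\qquad
      w_d(t) = \sqrt{\mathrm{freq}}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm},\\
      \mathrm{score}(d_{40}) &= \mathrm{coord}\cdot\sum_t w_q(t)\,w_d(t)
        = \tfrac{1}{2}\,\bigl(0.17138503\cdot 0.29170483 + 0.15912095\cdot 0.2708308\bigr) \approx 0.0465.
      \end{align*}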
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  2. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: ¬A user sensitive subject protection approach for book search service (2020) 0.03
    0.033245962 = product of:
      0.066491924 = sum of:
        0.066491924 = sum of:
          0.035709884 = weight(_text_:i in 5617) [ClassicSimilarity], result of:
            0.035709884 = score(doc=5617,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.20836058 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
          0.03078204 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
            0.03078204 = score(doc=5617,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.19345059 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
      0.5 = coord(1/2)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming increasingly untrusted; thus, how to prevent the disclosure of users' book-query privacy is of growing concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search that requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search that formulates the constraints ideal fake queries should satisfy, that is, (i) feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
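    The two constraints named in the abstract of entry 2 above, feature similarity and privacy exposure, lend themselves to a small illustration. The following Python sketch is not the authors' algorithm; the feature representation, the cosine heuristic, and all names are assumptions made purely to show how candidate fake queries could be filtered for subject safety and then ranked by similarity to the real query:

      # Illustrative sketch only (not the paper's method): choose plausible fake
      # queries that (i) resemble the real query's features, to maximize the
      # confusion effect, and (ii) share none of the user's sensitive subjects,
      # to avoid privacy exposure. Representations and thresholds are assumed.
      from dataclasses import dataclass
      import math

      @dataclass
      class Query:
          text: str
          features: dict[str, float]   # e.g. term weights for the query
          subjects: set[str]           # subject headings the query touches

      def cosine(a: dict[str, float], b: dict[str, float]) -> float:
          dot = sum(w * b.get(t, 0.0) for t, w in a.items())
          na = math.sqrt(sum(w * w for w in a.values()))
          nb = math.sqrt(sum(w * w for w in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      def select_fake_queries(real: Query, candidates: list[Query],
                              sensitive: set[str], k: int = 3) -> list[Query]:
          """Keep candidates that expose none of the sensitive subjects,
          then rank them by feature similarity to the real query."""
          safe = [c for c in candidates if not (c.subjects & sensitive)]
          safe.sort(key=lambda c: cosine(real.features, c.features), reverse=True)
          return safe[:k]

    Consistent with the abstract's requirement that the server-side search algorithm stay unchanged, such a selector would presumably run on the client, submitting the chosen fakes alongside the real query.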
  3. Cerda-Cosme, R.; Méndez, E.: Analysis of shared research data in Spanish scientific papers about COVID-19 : a first approach (2023) 0.03
    0.033245962 = product of:
      0.066491924 = sum of:
        0.066491924 = sum of:
          0.035709884 = weight(_text_:i in 916) [ClassicSimilarity], result of:
            0.035709884 = score(doc=916,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.20836058 = fieldWeight in 916, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=916)
          0.03078204 = weight(_text_:22 in 916) [ClassicSimilarity], result of:
            0.03078204 = score(doc=916,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.19345059 = fieldWeight in 916, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=916)
      0.5 = coord(1/2)
    
    Abstract
    During the coronavirus pandemic, the way science is done and shared changed, which motivates meta-research to help understand science communication in crises and improve its effectiveness. The objective is to study how many Spanish scientific papers on COVID-19 published during 2020 share their research data. This is a qualitative and descriptive study applying nine attributes: (a) availability, (b) accessibility, (c) format, (d) licensing, (e) linkage, (f) funding, (g) editorial policy, (h) content, and (i) statistics. We analyzed 1,340 papers; 1,173 (87.5%) did not include research data. A total of 12.5% share their research data, of which 2.1% share their data in repositories, 5% share their data through a simple request, 0.2% do not have permission to share their data, and 5.2% share their data as supplementary material. Only a small percentage share their research data, and the results reveal researchers' poor knowledge of how to share research data properly and of what research data are.
    Date
    21. 3.2023 19:22:02
  4. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.03
    0.033245962 = product of:
      0.066491924 = sum of:
        0.066491924 = sum of:
          0.035709884 = weight(_text_:i in 950) [ClassicSimilarity], result of:
            0.035709884 = score(doc=950,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.20836058 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.03078204 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.03078204 = score(doc=950,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.5 = coord(1/2)
    
    Date
    22. 4.2023 19:27:56
  5. Ruthven, I.: Resonance and the experience of relevance (2021) 0.03
    0.02794741 = product of:
      0.05589482 = sum of:
        0.05589482 = product of:
          0.11178964 = sum of:
            0.11178964 = weight(_text_:i in 211) [ClassicSimilarity], result of:
              0.11178964 = score(doc=211,freq=10.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.65227187 = fieldWeight in 211, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, I propose the concept of resonance as a useful one for describing what it means to experience relevance. Based on an extensive interdisciplinary review, I provide a novel framework that presents resonance as a spectrum of experience with a multitude of outcomes ranging from a sense of harmony and coherence to life transformation. I argue that resonance has different properties to the more traditional interpretation of relevance and provides a better system of explanation of what it means to experience relevance. I show how traditional approaches to relevance and resonance work in a complementary fashion and outline how resonance may present distinct new lines of research into relevance theory.
  6. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.027063664 = product of:
      0.054127328 = sum of:
        0.054127328 = product of:
          0.21650931 = sum of:
            0.21650931 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21650931 = score(doc=862,freq=2.0), product of:
                0.38523552 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045439374 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  7. Chessum, K.; Haiming, L.; Frommholz, I.: ¬A study of search user interface design based on Hofstede's six cultural dimensions (2022) 0.02
    0.021425933 = product of:
      0.042851865 = sum of:
        0.042851865 = product of:
          0.08570373 = sum of:
            0.08570373 = weight(_text_:i in 856) [ClassicSimilarity], result of:
              0.08570373 = score(doc=856,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.50006545 = fieldWeight in 856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.09375 = fieldNorm(doc=856)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Martin, K.: Predatory predictions and the ethics of predictive analytics (2023) 0.02
    0.017854942 = product of:
      0.035709884 = sum of:
        0.035709884 = product of:
          0.07141977 = sum of:
            0.07141977 = weight(_text_:i in 946) [ClassicSimilarity], result of:
              0.07141977 = score(doc=946,freq=8.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41672117 = fieldWeight in 946, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=946)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, I critically examine ethical issues introduced by predictive analytics. I argue firms can have a market incentive to construct deceptively inflated true-positive outcomes: individuals are over-categorized as requiring a penalizing treatment, and the treatment leads to mistakenly thinking this label was correct. I show that differences in power between firms developing and using predictive analytics and the subjects of those analytics can lead to firms reaping the benefits of predatory predictions while subjects bear the brunt of the costs. While profitable, the use of predatory predictions can deceive stakeholders by inflating the measurement of accuracy, diminish the individuality of subjects, and exert arbitrary power. I then argue that firms have a responsibility to distinguish between the treatment effect and predictive power of the predictive analytics program, better internalize the costs of categorizing someone as needing a penalizing treatment, and justify the predictions about subjects and the general use of predictive analytics. Subjecting individuals to predatory predictions only for a firm's efficiency and benefit is unethical and an arbitrary exertion of power. Firms developing and deploying a predictive analytics program can benefit from constructing predatory predictions while the cost is borne by the less powerful subjects of the program.
  9. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.02
    0.01539102 = product of:
      0.03078204 = sum of:
        0.03078204 = product of:
          0.06156408 = sum of:
            0.06156408 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
              0.06156408 = score(doc=1085,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.38690117 = fieldWeight in 1085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1085)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 8.2022 19:22:57
  10. Manzoni, L.: Nuovo Soggettario and semantic indexing of cartographic resources in Italy : an exploratory study (2022) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 1138) [ClassicSimilarity], result of:
              0.05713582 = score(doc=1138,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 1138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1138)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Location
    I
  11. Parapar, J.; Losada, D.E.; Presedo-Quindimil, M.A.; Barreiro, A.: Using score distributions to compare statistical significance tests for information retrieval evaluation (2020) 0.01
    0.012625352 = product of:
      0.025250703 = sum of:
        0.025250703 = product of:
          0.050501406 = sum of:
            0.050501406 = weight(_text_:i in 5506) [ClassicSimilarity], result of:
              0.050501406 = score(doc=5506,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29466638 = fieldWeight in 5506, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Statistical significance tests can provide evidence that the observed difference in performance between two methods is not due to chance. In information retrieval (IR), some studies have examined the validity and suitability of such tests for comparing search systems. We argue here that current methods for assessing the reliability of statistical tests suffer from some methodological weaknesses, and we propose a novel way to study significance tests for retrieval evaluation. Using score distributions, we model the output of multiple search systems, produce simulated search results from such models, and compare them using various significance tests. A key strength of this approach is that we assess statistical tests under perfect knowledge about the truth or falseness of the null hypothesis. This new method for studying the power of significance tests in IR evaluation is formal and innovative. Following this type of analysis, we found that both the sign test and the Wilcoxon signed-rank test have more power than the permutation test and the t-test. The sign test and Wilcoxon signed-rank test also behave well in terms of type I errors. The bootstrap test shows few type I errors, but it has less power than the other methods tested.
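    The evaluation idea in entry 11, judging significance tests while the truth of the null hypothesis is known by construction, can be sketched in a few lines. The simulation below only illustrates that idea and is not the authors' score-distribution models; the Beta-distributed scores, effect size, topic count, and use of SciPy are all assumptions:

      # Minimal sketch: simulate paired per-topic scores for two systems whose true
      # difference is known, then count how often each test rejects H0 at alpha=0.05
      # (its empirical power). Distributions and constants are illustrative only.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      alpha, trials, topics, true_delta = 0.05, 1000, 50, 0.02
      rejections = {"paired t-test": 0, "wilcoxon signed-rank": 0, "sign test": 0}

      for _ in range(trials):
          base = rng.beta(2, 5, size=topics)                      # system A scores
          improved = np.clip(base + true_delta +                  # system B: truly better
                             rng.normal(0.0, 0.03, topics), 0.0, 1.0)
          diff = improved - base
          if stats.ttest_rel(improved, base).pvalue < alpha:
              rejections["paired t-test"] += 1
          if stats.wilcoxon(improved, base).pvalue < alpha:
              rejections["wilcoxon signed-rank"] += 1
          wins = int((diff > 0).sum())
          if stats.binomtest(wins, n=topics, p=0.5).pvalue < alpha:   # sign test
              rejections["sign test"] += 1

      for name, hits in rejections.items():
          print(f"{name}: empirical power ~ {hits / trials:.2f}")

    Running the same loop with true_delta = 0 would instead estimate each test's type I error rate, which is the other property the paper examines.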
  12. Koster, L.: Persistent identifiers for heritage objects (2020) 0.01
    0.012625352 = product of:
      0.025250703 = sum of:
        0.025250703 = product of:
          0.050501406 = sum of:
            0.050501406 = weight(_text_:i in 5718) [ClassicSimilarity], result of:
              0.050501406 = score(doc=5718,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29466638 = fieldWeight in 5718, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5718)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive, and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections, and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects, etc. can be used by heritage institutions but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices and existing PID systems, with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented and used to address a number of practical considerations. This section concludes with a number of recommendations.
  13. Patriarca, S.: Information literacy gives us the tools to check sources and to verify factual statements : What does Popper's "Es gibt keine Autoritäten" mean? (2021) 0.01
    0.012625352 = product of:
      0.025250703 = sum of:
        0.025250703 = product of:
          0.050501406 = sum of:
            0.050501406 = weight(_text_:i in 331) [ClassicSimilarity], result of:
              0.050501406 = score(doc=331,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29466638 = fieldWeight in 331, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=331)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I wonder if you would consider an English perspective on the exchange between Bernd Jörs and Hermann Huemer. In my career in the independent education sector I can recall many discussions and Government reports about cross-curricular issues such as logical reasoning and critical thinking. In the IB system this led to the inclusion of "Theory of Knowledge" in the Diploma. In the UK we had "key skills" and "critical thinking." One such key skill is what we now call "information literacy." In his parody of information literacy, Dr Jörs seems to have confused a necessary condition with a sufficient condition. The fact that information competence may be necessary for serious academic study does not of course make it sufficient. When that is understood, the joke about the megalomaniac rather loses its force. (We had better pass over the rant which follows, the sneer at "earth sciences", and the German prejudice towards Austrians.)
  14. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.01
    0.012625352 = product of:
      0.025250703 = sum of:
        0.025250703 = product of:
          0.050501406 = sum of:
            0.050501406 = weight(_text_:i in 1048) [ClassicSimilarity], result of:
              0.050501406 = score(doc=1048,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29466638 = fieldWeight in 1048, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1048)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasises the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  15. Preminger, M.; Rype, I.; Ådland, M.K.; Massey, D.; Tallerås, K.: ¬The public library metadata landscape : the case of Norway 2017-2018 (2020) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 5802) [ClassicSimilarity], result of:
              0.049993843 = score(doc=5802,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 5802, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5802)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Samples, J.; Bigelow, I.: MARC to BIBFRAME : converting the PCC to Linked Data (2020) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 119) [ClassicSimilarity], result of:
              0.049993843 = score(doc=119,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=119)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Ruthven, I.: ¬An information behavior theory of transitions (2022) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 530) [ClassicSimilarity], result of:
              0.049993843 = score(doc=530,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=530)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Dagher, I.; Soufi, D.: Authority control of Arabic personal names : RDA and beyond (2021) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 707) [ClassicSimilarity], result of:
              0.049993843 = score(doc=707,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=707)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Adler, M.: ¬The strangeness of subject cataloging : afterword (2020) 0.01
    0.012370269 = product of:
      0.024740538 = sum of:
        0.024740538 = product of:
          0.049481075 = sum of:
            0.049481075 = weight(_text_:i in 5887) [ClassicSimilarity], result of:
              0.049481075 = score(doc=5887,freq=6.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.28871292 = fieldWeight in 5887, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "I can't presume to know how other catalogers view the systems, information resources, and institutions with which they engage on a daily basis. David Paton gives us a glimpse in this issue of the affective experiences of bibliographers and catalogers of artists' books in South Africa, and it is clear that the emotional range among them is wide. What I can say is that catalogers' feelings and worldviews, whatever they may be, give the library its shape. I think we can agree that the librarians who constructed the Library of Congress Classification around 1900, Melvil Dewey, and the many classifiers around the world past and present, have had particular sets of desires around control and access and order. We all are asked to submit to those desires in our library work, as well as our own pursuit of knowledge and pleasure reading. And every decision regarding the aboutness of a book, or about where to place it within a particular discipline, takes place in a cataloger's affective and experiential world. While the classification provides the outlines, the catalogers color in the spaces with the books, based on their own readings of the book descriptions and their interpretations of the classification scheme. The decisions they make and the structures to which they are bound affect the circulation of books and their readers across the library. Indeed, some of the encounters will be unexpected, strange, frustrating, frightening, shame-inducing, awe-inspiring, and/or delightful. The emotional experiences of students described in Mabee and Fancher's article, as well as those of any visitor to the library, are all affected by classificatory design. One concern is that a library's ordering principles may reinforce or heighten already existing feelings of precarity or marginality. Because the classifications are hidden from patrons' view, it is difficult to measure the way the order affects a person's mind and body. That a person does not consciously register the associations does not mean that they are not affected."
  20. Morris, V.: Automated language identification of bibliographic resources (2020) 0.01
    0.0123128155 = product of:
      0.024625631 = sum of:
        0.024625631 = product of:
          0.049251262 = sum of:
            0.049251262 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.049251262 = score(doc=5749,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 19:04:22
