Search (775 results, page 1 of 39)

  • year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.29
    0.29003748 = product of:
      0.5220674 = sum of:
        0.051274054 = product of:
          0.15382215 = sum of:
            0.15382215 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.15382215 = score(doc=1000,freq=2.0), product of:
                0.32843533 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.038739666 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.15382215 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.15382215 = score(doc=1000,freq=2.0), product of:
            0.32843533 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038739666 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.009326885 = weight(_text_:information in 1000) [ClassicSimilarity], result of:
          0.009326885 = score(doc=1000,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.13714671 = fieldWeight in 1000, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.15382215 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.15382215 = score(doc=1000,freq=2.0), product of:
            0.32843533 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038739666 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.15382215 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.15382215 = score(doc=1000,freq=2.0), product of:
            0.32843533 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038739666 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5555556 = coord(5/9)
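The per-term breakdown above follows Lucene's ClassicSimilarity (TF-IDF): each leaf is score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm. A minimal sketch that reproduces one leaf of the explanation, using the values shown above (this is stock Lucene scoring, not anything specific to this database):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity, mirroring
    the structure of an 'explain' leaf: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                               # tf(freq) = sqrt(termFreq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# The '3a' leaf of result 1: freq=2, docFreq=24, maxDocs=44218
print(classic_term_score(2.0, 24, 44218, 0.038739666, 0.0390625))  # ≈ 0.15382215
```

The same function reproduces the leaves of the other hits; only fieldNorm (driven by field length) and the term statistics change between documents.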
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien : Universität Wien, Library and Information Studies
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.27
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Rieder, B.: Engines of order : a mechanology of algorithmic techniques (2020) 0.05
    
    Abstract
    Software has become a key component of contemporary life and algorithmic techniques that rank, classify, or recommend anything that fits into digital form are everywhere. This book approaches the field of information ordering conceptually as well as historically. Building on the philosophy of Gilbert Simondon and the cultural techniques tradition, it first examines the constructive and cumulative character of software and shows how software-making constantly draws on large reservoirs of existing knowledge and techniques. It then reconstructs the historical trajectories of a series of algorithmic techniques that have indeed become the building blocks for contemporary practices of ordering. Developed in opposition to centuries of library tradition, coordinate indexing, text processing, machine learning, and network algorithms instantiate dynamic, perspectivist, and interested forms of arranging information, ideas, or people. Embedded in technical infrastructures and economic logics, these techniques have become engines of order that transform the spaces they act upon.
    Content
    Part I -- 1. Engines of Order -- 2. Rethinking Software -- 3. Software-Making and Algorithmic Techniques -- Part II -- 4. From Universal Classification to a Postcoordinated Universe -- 5. From Frequencies to Vectors -- 6. Interested Learning -- 7. Calculating Networks: From Sociometry to PageRank -- Conclusion: Toward Technical Culture. Published as open access by De Gruyter.
    Series
    Recursions: theories of media, materiality, and cultural techniques
  4. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.04
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  5. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.04
    
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music, and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) from both knowledge organization (Step 1a) and machine learning (Step 1b), merging them together (Step 2) to create an index of those multimedia objects (Step 3). We also outline further steps in creating an application to utilize the multimedia objects (Step 4) and maintaining and updating the database of features on those objects (Step 5).
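The five-step process model in the abstract above lends itself to a compact sketch. This is a hedged illustration only: the feature extractors here are stand-in stubs, not the authors' actual knowledge-organization lookup or machine-learning models.

```python
def extract_ko_features(obj):
    # Step 1a: features from a knowledge organization scheme (stub: curated tags)
    return {f"ko:{t}" for t in obj.get("tags", [])}

def extract_ml_features(obj):
    # Step 1b: features from a machine-learning classifier (stub: predicted labels)
    return {f"ml:{t}" for t in obj.get("predicted", [])}

def build_index(objects):
    # Steps 2-3: merge both feature sets and build an inverted index over them
    index = {}
    for obj_id, obj in objects.items():
        for feat in extract_ko_features(obj) | extract_ml_features(obj):
            index.setdefault(feat, set()).add(obj_id)
    return index

# Step 4 would query this index from an application; Step 5 would re-run
# extraction as objects change and update the stored features.
videos = {
    "v1": {"tags": ["jazz"], "predicted": ["saxophone"]},
    "v2": {"tags": ["jazz", "live"], "predicted": ["piano"]},
}
index = build_index(videos)
print(sorted(index["ko:jazz"]))  # ['v1', 'v2']
```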
  6. Hammache, A.; Boughanem, M.: Term position-based language model for information retrieval (2021) 0.04
    
    Abstract
    Term position feature is widely and successfully used in IR and Web search engines, to enhance the retrieval effectiveness. This feature is essentially used for two purposes: to capture query terms proximity or to boost the weight of terms appearing in some parts of a document. In this paper, we are interested in this second category. We propose two novel query-independent techniques based on absolute term positions in a document, whose goal is to boost the weight of terms appearing in the beginning of a document. The first one considers only the earliest occurrence of a term in a document. The second one takes into account all term positions in a document. We formalize each of these two techniques as a document model based on term position, and then we incorporate it into a basic language model (LM). Two smoothing techniques, Dirichlet and Jelinek-Mercer, are considered in the basic LM. Experiments conducted on three TREC test collections show that our model, especially the version based on all term positions, achieves significant improvements over the baseline LMs, and it also often performs better than two state-of-the-art baseline models, the chronological term rank model and the Markov random field model.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.5, S.627-642
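The idea described in entry 6 can be sketched in miniature: a Dirichlet-smoothed query-likelihood model whose term counts are boosted by the term's earliest position in the document. This is a toy illustration assuming a simple linear boost; it is not the authors' exact formulation.

```python
import math
from collections import Counter

def position_boosted_lm(query, doc, collection, mu=2000.0):
    """Dirichlet-smoothed query likelihood with a simple position boost:
    a query term counts for more the earlier it first appears in the doc."""
    coll_counts, coll_len = Counter(collection), len(collection)
    counts, doc_len = Counter(doc), len(doc)
    first_pos = {}
    for i, term in enumerate(doc):
        first_pos.setdefault(term, i)  # earliest occurrence only
    score = 0.0
    for q in query:
        # boost in [1, 2]: 2.0 at position 0, tapering toward 1.0 at the end
        boost = 1.0 + (1.0 - first_pos[q] / doc_len) if q in first_pos else 1.0
        tf = counts.get(q, 0) * boost
        p_coll = coll_counts.get(q, 0) / coll_len  # collection language model
        score += math.log((tf + mu * p_coll) / (doc_len + mu))
    return score

# A document mentioning the query term early outranks one mentioning it late
early = ["retrieval", "model", "term", "position"]
late = ["model", "term", "position", "retrieval"]
coll = early + late
print(position_boosted_lm(["retrieval"], early, coll) >
      position_boosted_lm(["retrieval"], late, coll))  # True
```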
  7. Zeynali-Tazehkandi, M.; Nowkarizi, M.: ¬A dialectical approach to search engine evaluation (2020) 0.04
    
    Abstract
    Evaluation of information retrieval systems is a fundamental topic in Library and Information Science. The aim of this paper is to connect the system-oriented and the user-oriented approaches to relevant philosophical schools. By reviewing the related literature, it was found that the evaluation of information retrieval systems is successful if it benefits from both system-oriented and user-oriented approaches (composite). The system-oriented approach is rooted in Parmenides' philosophy of stability (immovable) which Plato accepts and attributes to the world of forms; the user-oriented approach is rooted in Heraclitus' flux philosophy (motion) which Plato defers and attributes to the tangible world. Thus, using Plato's theory is a comprehensive approach for recognizing the concept of relevance. The theoretical and philosophical foundations determine the type of research methods and techniques. Therefore, Plato's dialectical method is an appropriate composite method for evaluating information retrieval systems.
  8. Hjoerland, B.: Information (2023) 0.03
    
    Abstract
    This article presents a brief history of the term "information" and its different meanings, which are both important and difficult because the different meanings of the term imply whole theories of knowledge. The article further considers the relation between "information" and the concepts "matter and energy", "data", "sign and meaning", "knowledge" and "communication". It presents and analyses the influence of information in information studies and knowledge organization and contains a presentation and critical analysis of some compound terms such as "information need", "information overload" and "information retrieval", which illuminate the use of the term information in information studies. An appendix provides a chronological list of definitions of information.
    Theme
    Information
  9. Morris, V.: Automated language identification of bibliographic resources (2020) 0.03
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
  10. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.03
    
    Abstract
    Purpose
    Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
    Design/methodology/approach
    In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and misstep/s.
    Findings
    Various factors compromise shared files retrieval, including collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
    Originality/value
    Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.1, S.130-147
  11. Singh, V.K.; Ghosh, I.; Sonagara, D.: Detecting fake news stories via multimodal analysis (2021) 0.03
    
    Abstract
    Filtering, vetting, and verifying digital information is an area of core interest in information science. Online fake news is a specific type of digital misinformation that poses serious threats to democratic institutions, misguides the public, and can lead to radicalization and violence. Hence, fake news detection is an important problem for information science research. While there have been multiple attempts to identify fake news, most such efforts have focused on a single modality (e.g., only text-based or only visual features). However, news articles are increasingly framed as multimodal news stories; hence, in this work, we propose a multimodal approach combining text and visual analysis of online news stories to automatically detect fake news. Drawing on key theories of information processing and presentation, we identify multiple text and visual features that are associated with fake or credible news articles. We then perform a predictive analysis to detect the features most strongly associated with fake news. Next, we combine these features in predictive models using multiple machine-learning techniques. The experimental results indicate that a multimodal approach outperforms single-modality approaches, allowing for better fake news detection.
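The feature-combination step described in the abstract can be sketched in a few lines. This is a hedged illustration only: the feature names, weights, and the simple linear score below are invented for demonstration and are not the authors' actual model, which extracts richer text and visual features and trains machine-learning classifiers on them.

```python
# Toy multimodal scoring: combine text-derived and (stand-in) visual
# features into one prediction. All feature names and weights are
# illustrative assumptions, not the paper's trained models.

def text_features(headline: str) -> dict:
    words = headline.lower().split()
    return {
        "exclamations": headline.count("!"),                      # sensational punctuation
        "all_caps_words": sum(w.isupper() for w in headline.split()),
        "length": len(words),
    }

def visual_features(image_meta: dict) -> dict:
    # Stand-in for features a vision pipeline would extract from the image
    return {
        "faces": image_meta.get("faces", 0),
        "is_graphic": int(image_meta.get("graphic", False)),
    }

def fake_news_score(headline: str, image_meta: dict, weights: dict) -> float:
    # Linear combination of both modalities' features
    feats = {**text_features(headline), **visual_features(image_meta)}
    return sum(weights.get(k, 0.0) * v for k, v in feats.items())

weights = {"exclamations": 0.5, "all_caps_words": 0.3, "faces": 0.1, "is_graphic": 0.4}
s1 = fake_news_score("SHOCKING!!! You WON'T believe this", {"faces": 2, "graphic": True}, weights)
s2 = fake_news_score("Parliament passes budget bill", {"faces": 1}, weights)
print(s1 > s2)
```

A real system would learn the weights from labeled data; the point here is only that features from both modalities feed one decision, which is what "multimodal" means in the abstract.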
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.1, S.3-17
  12. Guo, T.; Bai, X.; Zhen, S.; Abid, S.; Xia, F.: Lost at starting line : predicting maladaptation of university freshmen based on educational big data (2023) 0.03
    
    Date
    27.12.2022 18:34:22
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.1, S.17-32
  13. Cooke, N.A.; Kitzie, V.L.: Outsiders-within-Library and Information Science : reprioritizing the marginalized in critical sociocultural work (2021) 0.03
    
    Abstract
    While there are calls for new paradigms within the profession, there are also existing subgenres that fit this bill if they were fully acknowledged. This essay argues that underrepresented and otherwise marginalized scholars have already produced significant work within social, cultural, and community-oriented paradigms; social justice and advocacy; and diversity, equity, and inclusion. This work has not been sufficiently valued or promoted. Furthermore, the surrounding structural conditions have resulted in the dismissal, violent review and rejection, and erasure of underrepresented scholars' work, and in the stigmatization and delegitimization of that work. These scholars are "outsiders-within-LIS." By identifying the outsiders-within-LIS through the frame of standpoint theories, the authors suggest that a new paradigm does not need to be created; rather, an existing paradigm needs to be recognized and reprioritized. This reprioritized paradigm of critical sociocultural work has creatively enriched and expanded the field, and will continue to do so while decolonizing LIS curricula.
    Date
    18. 9.2021 13:22:27
    Series
    Special issue: Paradigm shift in the field of information
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.10, S.1285-1294
    Theme
    Information
  14. Das, S.; Naskar, D.; Roy, S.: Reorganizing educational institutional domain using faceted ontological principles (2022) 0.03
    
    Abstract
    The purpose of this work is to find out how different library classification systems and linguistic ontologies arrange a particular domain of interest and what limitations they pose for information retrieval. We use knowledge representation techniques and languages to construct a domain-specific ontology. This ontology would not only help in problem solving but also demonstrate the ease with which complex queries can be handled using principles of domain ontology, thereby facilitating better information retrieval. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves steps such as identification of the terminology, analysis, synthesis, standardization and ordering. First, OntoUML, a well-founded and established language for ontology-driven conceptual modelling, has been used for conceptualization. A phase transformation of the same model has subsequently been obtained in OWL-DL using the Protégé software. The final OWL ontology contains a total of around 232 axioms, comprising 148 logical axioms, 76 declaration axioms and 43 classes; these axioms glue together classes, properties and data types as well as constraints. Such data clustering cannot be achieved through general use of simple classification schemes. Hence it has been observed and established that a domain ontology built on faceted principles provides better information retrieval with enhanced precision. This ontology should be seen not only as an alternative to the existing classification system but as a knowledge base (KB) system that can handle complex queries well, which is the ultimate purpose of any classification or indexing system. In this paper, we try to understand how ontology-based information retrieval systems can prove their utility in the field of library science, with a particular focus on the education domain.
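To make the "complex queries" claim concrete, here is a toy sketch of facet-wise filtering over an education-domain collection. The facet names, values, and records are invented for illustration; the paper's actual ontology is formalized in OntoUML and OWL-DL, not in Python.

```python
# Illustrative sketch only: each record is described along several
# independent facets, and a "complex query" is just a conjunction of
# facet constraints. Facets and data are assumptions for demonstration.

records = [
    {"institution": "IIT", "level": "postgraduate", "mode": "on-campus", "discipline": "engineering"},
    {"institution": "IGNOU", "level": "undergraduate", "mode": "distance", "discipline": "humanities"},
    {"institution": "Presidency", "level": "postgraduate", "mode": "distance", "discipline": "science"},
]

def faceted_query(records, **constraints):
    """Return records satisfying every facet constraint at once."""
    return [r for r in records
            if all(r.get(facet) == value for facet, value in constraints.items())]

# "Postgraduate distance education" combines two facets in one query,
# a combination a single flat class number may not pre-coordinate.
hits = faceted_query(records, level="postgraduate", mode="distance")
print([r["institution"] for r in hits])
```

A flat enumerative scheme would need a pre-made class for every such combination; faceted description lets the combinations be composed at query time, which is the advantage the abstract claims for the ontology.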
  15. Fonseca, F.: Whether or when : the question on the use of theories in data science (2021) 0.02
    
    Abstract
    Data Science can be considered a technique or a science. As a technique, it is more interested in the "what" than in the "why" of data. It does not need theories that explain how things work; it just needs the results. As a science, however, working strictly from data and without theories contradicts the post-empiricist view of science. In this view, theories come before data, and data is used to corroborate or falsify theories. Nevertheless, one of the most controversial statements about Data Science is that it is a science that can work without theories. In this conceptual paper, we focus on the science aspect of Data Science: what kind of science is it? We propose a three-phased view of Data Science which shows that different theories play different roles in each of the phases we consider. We focus on when theories are used in Data Science rather than on the controversy over whether theories are used at all. In the end, we will see that the statement "Data Science works without theories" is better put as "in some of its phases, Data Science works without the theories that originally motivated the creation of the data."
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.12, S.1593-1604
  16. Ghosh, S.S.; Das, S.; Chatterjee, S.K.: Human-centric faceted approach for ontology construction (2020) 0.02
    
    Abstract
    In this paper, we propose an ontology-building method, called human-centric faceted approach for ontology construction (HCFOC). HCFOC uses the human-centric approach, combined with the idea of selective dissemination of information (SDI), to deal with context. Further, this ontology construction process makes use of facet analysis and an analytico-synthetic classification approach. This novel fusion contributes to the originality of HCFOC and distinguishes it from other existing ontology construction methodologies. Based on HCFOC, an ontology of the tourism domain has been designed using the Protégé-5.5.0 ontology editor. The HCFOC methodology has provided the necessary flexibility, extensibility and robustness, and has facilitated the capturing of background knowledge. It models the tourism ontology in such a way that it is able to deal with the context of a tourist's information need with precision. This is evident from the result that more than 90% of users' queries were successfully met. The use of domain knowledge and techniques from both library and information science and computer science has helped in the realization of the desired purpose of this ontology construction process. It is envisaged that HCFOC will have implications for ontology developers. The demonstrated tourism ontology can support any tourism information retrieval system.
  17. Bosancic, B.: Information, data, and knowledge in the cognitive system of the observer (2020) 0.02
    
    Abstract
    Purpose In line with the cognitive viewpoint on the phenomenon of information, the constructivist tradition based on Maturana and Varela's theory of knowing, and some aspects of Shannon's theory of communication, the purpose of this paper is to shed more light on the role of information, data, and knowledge in the cognitive system (domain) of the observer. Design/methodology/approach In addition to the literature review, a proposed description of the communication and knowledge acquisition processes within the observer's cognitive system/domain is elaborated. Findings The paper recognizes communication and knowledge acquisition as separate processes based on two roles of information within the observer's cognitive system. The first role is connected with the communication aspects of Shannon's theory, relating to the encoding of cognitive entities in the cognitive domain as data representations in order to calculate their informativeness. The second role involves establishing relations between cognitive entities encoded as data representations through the knowledge acquisition process in the observer's cognitive domain. Originality/value In this way, according to the cognitive viewpoint, communication and knowledge acquisition processes are recognized as important aspects of the cognitive process as a whole. In line with such a theoretical approach, the paper seeks to provide an extension of Shannon's original idea, intending to involve the observer's knowledge structure as an important framework for the deepening of information theory.
    Theme
    Information
  18. Fattahi, R.: Towards developing theories about data : a philosophical and scientific approach (2022) 0.02
    
    Abstract
    Similar to information and knowledge, data, and especially big data, are now regarded as among the most vital elements of the 21st century, since they provide multiple capabilities to individuals and organizations. However, in contrast to the theories that exist about information and knowledge, most scientific disciplines have made no significant attempt to build theories about data. This paper first reviews the different definitions of the concept of data provided in the works of scholars. It then identifies and explores the philosophical aspects as well as the multiple capabilities and features that can be derived from data. Finally, a starter list of basic/general theories is developed on the basis of these capabilities and features. Such new theories can be used as meta-theories to extend data theories for various scientific disciplines. The important notion supporting the development of theories about data is that, if data is so important and data science is to continue flourishing in a variety of specialized fields and trends, then we need to build relevant theories about data for research and practical purposes in a multidisciplinary context.
  19. MacFarlane, A.; Missaoui, S.; Makri, S.; Gutierrez Lopez, M.: Sender vs. recipient-orientated information systems revisited (2022) 0.02
    
    Abstract
    Purpose Belkin and Robertson (1976a) reflected on the ethical implications of theoretical research in information science and warned that there was potential for abuse of knowledge gained by undertaking such research and applying it to information systems. In particular, they identified the domains of advertising and political propaganda as posing particular problems. The purpose of this literature review is to revisit these ideas in the light of recent events in global information systems that demonstrate that their fears were justified. Design/methodology/approach The authors revisit the theory in information science that Belkin and Robertson used to build their argument, together with the discussion on ethics that resulted from this work in the late 1970s and early 1980s. The authors then review recent literature in the field of information systems, specifically information retrieval, social media and recommendation systems, that highlights the problems identified by Belkin and Robertson. Findings Information science theories have been used, in conjunction with empirical evidence gathered from user interactions, in ways that have been detrimental to both individuals and society. The paper argues that the information science and systems communities should find ways to return control to the user wherever possible, and considers ways to achieve this. Research limitations/implications The ethical issues identified require a multidisciplinary approach, with research needed in information science, computer science, information systems, business, sociology, psychology, journalism, government and politics. This scope is too large for a single literature review, so the authors focus only on the design and implementation of information systems (Zimmer, 2008a) from an information science and information systems perspective. Practical implications The authors argue that information systems such as search technologies, social media applications and recommendation systems should be designed with the recipient of the information in mind (Paisley and Parker, 1965), not the sender of that information. Social implications Information systems designed ethically and with users in mind will go some way toward addressing the ill effects for individuals and society evident in global information systems. Originality/value The authors synthesize the evidence from the literature to provide potential technological solutions to the ethical issues identified, with a set of recommendations for information systems designers and implementers.
    Theme
    Information
  20. Zhao, D.; Strotmann, A.: Intellectual structure of information science 2011-2020 : an author co-citation analysis (2022) 0.02
    
    Abstract
    Purpose This study continues a long history of author co-citation analysis of the intellectual structure of information science into the time period of 2011-2020. It also examines changes in this structure from 2006-2010 through 2011-2015 to 2016-2020. Results will contribute to a better understanding of the information science research field. Design/methodology/approach The well-established procedures and techniques for author co-citation analysis were followed. Full records of research articles in core information science journals published during 2011-2020 were retrieved and downloaded from the Web of Science database. About 150 of the most highly cited authors in each of the two five-year time periods were selected from this dataset to represent the field, and their co-citation counts were calculated. Each co-citation matrix was input into SPSS for factor analysis, and results were visualized in Pajek. Factors were interpreted as specialties and labeled upon an examination of articles written by authors who load primarily on each factor. Findings The two-camp structure of information science continued to be clearly present. Bibliometric indicators for research evaluation dominated the Knowledge Domain Analysis camp during both five-year time periods, whereas interactive information retrieval (IR) dominated the IR camp during 2011-2015 but shared dominance with information behavior during 2016-2020. Bridging between the two camps became increasingly weaker and was provided only by the scholarly communication specialty during 2016-2020. The IR systems specialty drifted further away from the IR camp. The information behavior specialty experienced a deep slump during 2011-2020 in its evolution. Altmetrics grew to dominate the Webometrics specialty and brought it a sharp increase during 2016-2020. Originality/value Author co-citation analysis (ACA) is effective in revealing the intellectual structures of research fields. Most related studies used term-based methods to identify individual research topics but did not examine the interrelationships between these topics or the overall structure of the field. The few studies that did discuss the overall structure paid little attention to the effect of changes to the source journals on the results. The present study does not have these problems and continues the long history of benchmark contributions to a better understanding of the information science field using ACA.
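The core counting step of author co-citation analysis can be sketched as follows. The author names and reference lists here are invented; the study itself draws real co-citation counts from Web of Science records and then factor-analyzes the matrix in SPSS, which this fragment does not attempt.

```python
# ACA counting sketch: two authors are co-cited once for each article
# whose reference list cites them both. Data below are invented.

from itertools import combinations
from collections import Counter

# Each entry: the set of authors cited in one article's reference list.
reference_lists = [
    {"Salton", "Robertson", "Belkin"},
    {"Salton", "Robertson"},
    {"Garfield", "Small", "Salton"},
]

cocitations = Counter()
for refs in reference_lists:
    # Sorting makes the pair key order-independent: (A, B) == (B, A).
    for a, b in combinations(sorted(refs), 2):
        cocitations[(a, b)] += 1

print(cocitations[("Robertson", "Salton")])
```

The resulting symmetric count matrix is what gets handed to factor analysis; factors whose authors load together are then interpreted as the specialties discussed in the abstract.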

Languages

  • e 656
  • d 114
  • pt 3
  • m 2
  • sp 1

Types

  • a 731
  • el 74
  • m 23
  • p 7
  • s 6
  • A 1
  • EL 1
  • x 1

Subjects

Classifications