Search (764 results, page 1 of 39)

  • year_i:[2020 TO 2030}
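The filter above uses Lucene's half-open range syntax: square brackets are inclusive bounds and curly brackets exclusive ones, so "year_i:[2020 TO 2030}" matches 2020 <= year < 2030. A minimal sketch of the same predicate (the function name is ours, for illustration only):

```python
# Lucene range syntax: "[" / "]" are inclusive bounds, "{" / "}" exclusive.
# year_i:[2020 TO 2030} therefore matches 2020 <= year < 2030.
def matches_year_filter(year, lo=2020, hi=2030):
    return lo <= year < hi
```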
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.20
    0.19748023 = product of:
      0.26330698 = sum of:
        0.06296344 = product of:
          0.18889032 = sum of:
            0.18889032 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.18889032 = score(doc=1000,freq=2.0), product of:
                0.40331158 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047571484 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.18889032 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.18889032 = score(doc=1000,freq=2.0), product of:
            0.40331158 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047571484 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.011453216 = weight(_text_:information in 1000) [ClassicSimilarity], result of:
          0.011453216 = score(doc=1000,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.13714671 = fieldWeight in 1000, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.75 = coord(3/4)
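The nested explain output above can be reproduced from the printed inputs. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (weight = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm; coord(m/n) scales a sum by the fraction of query clauses that matched):

```python
import math

# Recompute the explain tree for hit 1 (doc 1000) from the printed inputs.
idf_rare = 8.478011          # idf(docFreq=24, maxDocs=44218) for "3a" and "2f"
query_norm = 0.047571484
field_norm = 0.0390625       # byte-quantized document-length norm
tf = math.sqrt(2.0)          # tf(freq=2.0)

query_weight = idf_rare * query_norm        # ~0.40331158
field_weight = tf * idf_rare * field_norm   # ~0.46834838
weight = query_weight * field_weight        # ~0.18889032

info_weight = 0.011453216    # "information" clause, taken as printed
# The "3a" clause sits under coord(1/3); the document total under coord(3/4).
total = weight * (1 / 3) + weight + info_weight  # ~0.26330698
score = total * 0.75                             # ~0.19748023
```

Each intermediate matches the corresponding line of the explain tree to within its printed rounding.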
    
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
     Wien : Universität Wien / Library and Information Studies
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.15
    0.15111226 = product of:
      0.30222452 = sum of:
        0.07555613 = product of:
          0.22666839 = sum of:
            0.22666839 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.22666839 = score(doc=862,freq=2.0), product of:
                0.40331158 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047571484 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.22666839 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.22666839 = score(doc=862,freq=2.0), product of:
            0.40331158 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047571484 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Source
     https://arxiv.org/abs/2212.06721
  3. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.06
    0.063188285 = product of:
      0.12637657 = sum of:
        0.018109124 = weight(_text_:information in 5843) [ClassicSimilarity], result of:
          0.018109124 = score(doc=5843,freq=10.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.21684799 = fieldWeight in 5843, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5843)
        0.10826744 = sum of:
          0.07604104 = weight(_text_:retrieval in 5843) [ClassicSimilarity], result of:
            0.07604104 = score(doc=5843,freq=20.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.5284309 = fieldWeight in 5843, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5843)
          0.0322264 = weight(_text_:22 in 5843) [ClassicSimilarity], result of:
            0.0322264 = score(doc=5843,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.19345059 = fieldWeight in 5843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5843)
      0.5 = coord(2/4)
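The tf and idf factors repeated throughout these explain trees follow ClassicSimilarity's defaults, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A short check against the printed values:

```python
import math

# ClassicSimilarity term-frequency and inverse-document-frequency defaults:
def classic_tf(freq):
    return math.sqrt(freq)

def classic_idf(doc_freq, max_docs=44218):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Values printed in the explain trees above:
assert math.isclose(classic_tf(10.0), 3.1622777, rel_tol=1e-6)   # "information", freq=10
assert math.isclose(classic_tf(20.0), 4.472136, rel_tol=1e-6)    # "retrieval", freq=20
assert math.isclose(classic_idf(5836), 3.024915, rel_tol=1e-6)   # "retrieval"
assert math.isclose(classic_idf(3622), 3.5018296, rel_tol=1e-6)  # "22"
assert math.isclose(classic_idf(24), 8.478011, rel_tol=1e-6)     # "3a" / "2f"
```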
    
    Abstract
     Purpose: Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
     Design/methodology/approach: In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and misstep/s.
     Findings: Various factors compromise shared files retrieval including: collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
     Originality/value: Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.1, S.130-147
  4. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.05
    0.048156153 = product of:
      0.09631231 = sum of:
        0.016832722 = weight(_text_:information in 941) [ClassicSimilarity], result of:
          0.016832722 = score(doc=941,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.20156369 = fieldWeight in 941, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=941)
        0.07947958 = sum of:
          0.040807907 = weight(_text_:retrieval in 941) [ClassicSimilarity], result of:
            0.040807907 = score(doc=941,freq=4.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.2835858 = fieldWeight in 941, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.046875 = fieldNorm(doc=941)
          0.038671676 = weight(_text_:22 in 941) [ClassicSimilarity], result of:
            0.038671676 = score(doc=941,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.23214069 = fieldWeight in 941, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=941)
      0.5 = coord(2/4)
    
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches toward name gender identification rely exclusively on using the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.461-475
  5. Hartel, J.: ¬The red thread of information (2020) 0.05
    0.04624547 = product of:
      0.09249094 = sum of:
        0.03621825 = weight(_text_:information in 5839) [ClassicSimilarity], result of:
          0.03621825 = score(doc=5839,freq=40.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.43369597 = fieldWeight in 5839, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5839)
        0.05627269 = sum of:
          0.02404629 = weight(_text_:retrieval in 5839) [ClassicSimilarity], result of:
            0.02404629 = score(doc=5839,freq=2.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.16710453 = fieldWeight in 5839, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5839)
          0.0322264 = weight(_text_:22 in 5839) [ClassicSimilarity], result of:
            0.0322264 = score(doc=5839,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.19345059 = fieldWeight in 5839, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5839)
      0.5 = coord(2/4)
    
    Abstract
     Purpose: In The Invisible Substrate of Information Science, a landmark article about the discipline of information science, Marcia J. Bates wrote that ".we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
     Design/methodology/approach: Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
     Findings: Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely, domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
     Research limitations/implications: With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
     Originality/value: Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
    Theme
    Information
  6. Chi, Y.; He, D.; Jeng, W.: Laypeople's source selection in online health information-seeking process (2020) 0.04
    0.040284313 = product of:
      0.08056863 = sum of:
        0.02429594 = weight(_text_:information in 34) [ClassicSimilarity], result of:
          0.02429594 = score(doc=34,freq=18.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2909321 = fieldWeight in 34, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=34)
        0.05627269 = sum of:
          0.02404629 = weight(_text_:retrieval in 34) [ClassicSimilarity], result of:
            0.02404629 = score(doc=34,freq=2.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.16710453 = fieldWeight in 34, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
          0.0322264 = weight(_text_:22 in 34) [ClassicSimilarity], result of:
            0.0322264 = score(doc=34,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.19345059 = fieldWeight in 34, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
      0.5 = coord(2/4)
    
    Abstract
     For laypeople, searching online health information resources can be challenging due to topic complexity and the large number of online sources with differing quality. The goal of this article is to examine, among all the available online sources, which online sources laypeople select to address their health-related information needs, and whether or how much the severity of a health condition influences their selection. Twenty-four participants were recruited individually, and each was asked (using a retrieval system called HIS) to search for information regarding a severe health condition and a mild health condition, respectively. The selected online health information sources were automatically captured by the HIS system and classified at both the website and webpage levels. Participants' selection behavior patterns were then plotted across the whole information-seeking process. Our results demonstrate that laypeople's source selection fluctuates during the health information-seeking process, and also varies by the severity of health conditions. This study reveals laypeople's real usage of different types of online health information sources, and has implications for the design of search engines as well as the development of health literacy programs.
    Date
    12.11.2020 13:22:09
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.12, S.1484-1499
  7. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.04
    0.037237264 = product of:
      0.07447453 = sum of:
        0.021488138 = weight(_text_:information in 566) [ClassicSimilarity], result of:
          0.021488138 = score(doc=566,freq=22.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.25731003 = fieldWeight in 566, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.05298639 = sum of:
          0.027205272 = weight(_text_:retrieval in 566) [ClassicSimilarity], result of:
            0.027205272 = score(doc=566,freq=4.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.18905719 = fieldWeight in 566, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
          0.025781117 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
            0.025781117 = score(doc=566,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.15476047 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
    LCSH
    Information science
    Information storage and retrieval systems / Management
    Subject
    Information science
    Information storage and retrieval systems / Management
  8. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.04
    0.03623499 = product of:
      0.07246998 = sum of:
        0.016197294 = weight(_text_:information in 1012) [ClassicSimilarity], result of:
          0.016197294 = score(doc=1012,freq=8.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.19395474 = fieldWeight in 1012, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.05627269 = sum of:
          0.02404629 = weight(_text_:retrieval in 1012) [ClassicSimilarity], result of:
            0.02404629 = score(doc=1012,freq=2.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.16710453 = fieldWeight in 1012, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1012)
          0.0322264 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
            0.0322264 = score(doc=1012,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.19345059 = fieldWeight in 1012, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1012)
      0.5 = coord(2/4)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has been emerging. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.759-774
  9. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.03
    0.034222323 = product of:
      0.06844465 = sum of:
        0.03621825 = weight(_text_:information in 1085) [ClassicSimilarity], result of:
          0.03621825 = score(doc=1085,freq=10.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.43369597 = fieldWeight in 1085, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1085)
        0.0322264 = product of:
          0.0644528 = sum of:
            0.0644528 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
              0.0644528 = score(doc=1085,freq=2.0), product of:
                0.16658723 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047571484 = queryNorm
                0.38690117 = fieldWeight in 1085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1085)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
     Cf.: https://www.nomos-elibrary.de/10.5771/0943-7444-2022-1-3/what-is-information-an-information-veteran-looks-back-jahrgang-49-2022-heft-1?page=1.
    Date
    18. 8.2022 19:22:57
    Theme
    Information
  10. Wu, Z.; Lu, C.; Zhao, Y.; Xie, J.; Zou, D.; Su, X.: ¬The protection of user preference privacy in personalized information retrieval : challenges and overviews (2021) 0.03
    0.02845651 = product of:
      0.05691302 = sum of:
        0.022906432 = weight(_text_:information in 520) [ClassicSimilarity], result of:
          0.022906432 = score(doc=520,freq=16.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.27429342 = fieldWeight in 520, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
        0.03400659 = product of:
          0.06801318 = sum of:
            0.06801318 = weight(_text_:retrieval in 520) [ClassicSimilarity], result of:
              0.06801318 = score(doc=520,freq=16.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.47264296 = fieldWeight in 520, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=520)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This paper reviews a large number of research achievements relevant to user privacy protection in an untrusted network environment, and then analyzes and evaluates their application limitations in personalized information retrieval, to establish the conditional constraints that an effective approach for user preference privacy protection in personalized information retrieval should meet, thus providing a basic reference for the solution of this problem. First, based on the basic framework of a personalized information retrieval platform, we establish a complete set of constraints for user preference privacy protection in terms of security, usability, efficiency, and accuracy. Then, we comprehensively review the technical features for all kinds of popular methods for user privacy protection, and analyze their application limitations in personalized information retrieval, according to the constraints of preference privacy protection. The results show that personalized information retrieval has higher requirements for users' privacy protection, i.e., it is required to comprehensively improve the security of users' preference privacy on the untrusted server-side, under the precondition of not changing the platform, algorithm, efficiency, and accuracy of personalized information retrieval. However, all kinds of existing privacy methods still cannot meet the above requirements. This paper is an important attempt at the problem of user preference privacy protection in personalized information retrieval, and can provide a basic reference and direction for further study of the problem.
  11. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.03
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  12. Hjoerland, B.: Information (2023) 0.03
    Abstract
    This article presents a brief history of the term "information" and its different meanings, which are both important and difficult because the different meanings of the term imply whole theories of knowledge. The article further considers the relation between "information" and the concepts "matter and energy", "data", "sign and meaning", "knowledge" and "communication". It presents and analyses the influence of information in information studies and knowledge organization and contains a presentation and critical analysis of some compound terms such as "information need", "information overload" and "information retrieval", which illuminate the use of the term information in information studies. An appendix provides a chronological list of definitions of information.
    Theme
    Information
  13. Petras, V.; Womser-Hacker, C.: Evaluation im Information Retrieval (2023) 0.03
    Abstract
    The goal of an evaluation is to examine whether, or to what extent, an information system meets the requirements placed on it. Information systems can be evaluated from different perspectives. For a holistic evaluation that considers different quality aspects (e.g., how well a system ranks relevant documents, how fast a system executes a search, how the presentation of results is designed, or how searchers are guided through the system) and that checks the fulfillment of several requirements, it is advisable to triangulate both perspectives and methods (i.e., to use several approaches to quality assessment). In information retrieval (IR), evaluation concentrates on assessing the quality of the search function of an information retrieval system (IRS), where a distinction is often made between system-centered and user-centered evaluation. This chapter focuses on system-centered evaluation, while other chapters of this handbook discuss other evaluation approaches (see chapters C 4 Interaktives Information Retrieval, C 7 Cross-Language Information Retrieval, and D 1 Information Behavior).
  14. Strecker, D.: Dataset Retrieval : Informationsverhalten von Datensuchenden und das Ökosystem von Data-Retrieval-Systemen (2022) 0.03
    Abstract
    Various stakeholders are calling for better availability of research data. The success of these initiatives depends largely on how easily the published datasets can be found, which is why dataset retrieval is gaining importance. Dataset retrieval is a special form of information retrieval concerned with finding datasets. This article summarizes current research findings on the information behavior of data seekers. Two search services with different orientations are then presented and compared as examples. To show how these services interlock, content overlaps between their data holdings are used to analyze the exchange of metadata.
  15. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.03
    Abstract
    Experimentation is the process of trying something out to discover what happens. It is a widespread information practice, yet often bypassed in information-behavior research. This article argues that experimentation complements prior knowledge, documents, and people as an important fourth class of information sources. Relative to the other classes, the distinguishing characteristics of experimentation are that it is a personal (as opposed to interpersonal) source and that it provides "backtalk." When the information seeker tries something out and then attends to the resulting situation, it is as though the materials of the situation talk back: They provide the information seeker with a situated and direct experience of the consequences of the tried-out options. In this way, experimentation involves obtaining information by creating it. It also involves turning material and behavioral processes into information interactions. Thereby, information seeking by experimentation is important to practical information literacy and extends information-behavior research with new insights on the interrelations between creating and seeking information.
    Date
    21. 3.2023 19:22:29
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.383-387
  16. Huvila, I.: Making and taking information (2022) 0.03
    Abstract
    Information behavior theory covers different aspects of the totality of information-related human behavior rather unevenly. The transitions or trading zones between different types of information activities have remained perhaps especially under-theorized. This article interrogates and expands a conceptual apparatus of information making and information taking as a pair of substantial concepts for explaining, in part, the mobility of information in terms of doing that unfolds as a process of becoming rather than of being, and in part, what is happening when information comes into being and when something is taken up for use as information. Besides providing an apparatus to describe the nexus of information provision and acquisition, a closer consideration of the parallel doings opens opportunities to enrich the inquiry of the conditions and practice of information seeking, appropriation, discovery, and retrieval as modes taking, and learning and information use as its posterities.
    Series
    JASIS&T special issue on information behavior and information practices theory
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.4, S.528-541
    Theme
    Information
  17. Kuehn, E.F.: ¬The information ecosystem concept in information literacy : a theoretical approach and definition (2023) 0.03
    Abstract
    Despite the prominence of the concept of the information ecosystem (hereafter IE) in information literacy documents and literature, it is under-theorized. This article proposes a general definition of IE for information literacy. After reviewing the current use of the IE concept in the Association of College and Research Libraries (ACRL) Framework for Information Literacy and other information literacy sources, existing definitions of IE and similar concepts (e.g., "evidence ecosystems") will be examined from other fields. These will form the basis of the definition of IE proposed in the article for the field of information literacy: "all structures, entities, and agents related to the flow of semantic information relevant to a research domain, as well as the information itself."
    Date
    22. 3.2023 11:52:50
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.434-443
  18. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.03
    Abstract
    Purpose - With the shift to an information-based society and the decentralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload". To do so, a concept analysis using Rodgers' approach was performed.
    Design/methodology/approach - A concept analysis using Rodgers' approach was conducted on a corpus of documents published between 2010 and September 2020. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA).
    Findings - The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations, and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals, and the information environment. In terms of manifestations, information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed the authors to provide a definition of information overload.
    Originality/value - Through their concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
    Theme
    Information
  19. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.03
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
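    The contrast drawn in the abstract above - LSA working on the raw document-term matrix, CA working on standardized residuals that strip out marginal (document-length and term-frequency) effects - can be sketched in a few lines of numpy. This is a minimal illustration of the two decompositions, not the paper's implementation; the function names, the toy matrix, and the cosine-ranking step are all assumptions made for the example.

    ```python
    import numpy as np

    def lsa_coords(N, k):
        # LSA: truncated SVD of the document-term matrix itself
        U, s, Vt = np.linalg.svd(N.astype(float), full_matrices=False)
        return U[:, :k] * s[:k], Vt[:k]      # doc coordinates, term directions

    def ca_coords(N, k):
        # CA: SVD of standardized residuals, which removes the marginal
        # effects that dominate LSA's first dimensions
        P = N / N.sum()
        r, c = P.sum(axis=1), P.sum(axis=0)
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
        U, s, _ = np.linalg.svd(S, full_matrices=False)
        return (U[:, :k] * s[:k]) / np.sqrt(r)[:, None]  # principal row coords

    def rank_docs(doc_coords, q_vec):
        # rank documents by cosine similarity to a folded-in query
        sims = (doc_coords @ q_vec) / (
            np.linalg.norm(doc_coords, axis=1) * np.linalg.norm(q_vec) + 1e-12)
        return np.argsort(-sims)

    # toy counts: docs 0-1 share one topic, docs 2-3 another;
    # doc 0 is three times as long as doc 1 but has the same term profile
    N = np.array([[6, 3, 0, 0],
                  [2, 1, 0, 0],
                  [0, 0, 4, 2],
                  [1, 0, 3, 3]])

    docs_lsa, terms = lsa_coords(N, k=2)
    order = rank_docs(docs_lsa, terms @ np.array([1.0, 0, 0, 0]))  # query: term 0
    docs_ca = ca_coords(N, k=2)
    ```

    Because CA depends only on row profiles, documents 0 and 1 (identical profiles, different lengths) receive identical CA coordinates, which is exactly the marginal-effect removal the abstract describes.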
  20. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.02
    Abstract
    This paper describes an approach to path-finding in intelligent graphs whose vertices are intelligent agents. A possible implementation of this approach is described, based on logical inference in a distributed frame hierarchy. The presented approach can be used to implement distributed intelligent information systems that include automatic navigation and path generation in hypertext, which can be used, for example, in distance education, as well as for organizing intelligent web catalogues with flexible ontology-based information retrieval.
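    The core idea the abstract describes - finding a path through a graph whose vertices are agents that decide, based on ontology terms, whether they are relevant to a query - can be illustrated with a plain breadth-first search. This is a hedged sketch, not the ROMEO architecture: the class, the `offers` predicate, and the concept sets are invented for the example, and the paper's distributed frame inference is reduced here to simple set intersection.

    ```python
    from collections import deque

    class Agent:
        """A vertex that exposes itself only for queries it knows about."""
        def __init__(self, name, concepts, links):
            self.name = name
            self.concepts = set(concepts)  # ontology terms this node covers
            self.links = links             # names of neighbouring agents

        def offers(self, query):
            # stand-in for ontology-based inference: term overlap
            return bool(self.concepts & query)

    def find_path(agents, start, goal, query):
        """BFS through agents, traversing only nodes relevant to the query."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in agents[path[-1]].links:
                if nxt not in seen and agents[nxt].offers(query):
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no query-relevant route exists

    # hypothetical catalogue of pages/agents for a "logic" learning path
    agents = {
        "intro":     Agent("intro", {"logic"}, ["frames", "search"]),
        "frames":    Agent("frames", {"logic", "frames"}, ["inference"]),
        "search":    Agent("search", {"retrieval"}, ["inference"]),
        "inference": Agent("inference", {"logic", "inference"}, []),
    }
    path = find_path(agents, "intro", "inference", {"logic"})
    ```

    A query for {"logic"} routes the reader via the "frames" node, while a query for a concept no intermediate agent offers yields no path - the ontology filter, not the raw topology, determines navigability.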

Languages

  • e 645
  • d 114
  • pt 3
  • m 2
  • sp 1

Types

  • a 722
  • el 72
  • m 23
  • p 6
  • s 6
  • A 1
  • EL 1
  • x 1

Subjects

Classifications