Search (814 results, page 1 of 41)

  • year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.20
    0.2044285 = product of:
      0.35774985 = sum of:
        0.049812686 = product of:
          0.14943805 = sum of:
            0.14943805 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.14943805 = score(doc=1000,freq=2.0), product of:
                0.31907457 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037635546 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.14943805 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.14943805 = score(doc=1000,freq=2.0), product of:
            0.31907457 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037635546 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.009061059 = weight(_text_:information in 1000) [ClassicSimilarity], result of:
          0.009061059 = score(doc=1000,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.13714671 = fieldWeight in 1000, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.14943805 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.14943805 = score(doc=1000,freq=2.0), product of:
            0.31907457 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037635546 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5714286 = coord(4/7)
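The score breakdowns in this listing are Lucene ClassicSimilarity (TF-IDF) explain output. As a minimal sketch (the constants `docFreq`, `maxDocs`, `fieldNorm` come from the index, and `queryNorm` is taken from the listing rather than derived), the leaf weight for the `_text_:3a` term above can be reproduced as:

```python
import math

# Values copied from the explain tree above
doc_freq, max_docs = 24, 44218
freq = 2.0
field_norm = 0.0390625      # length normalization, quantized by Lucene
query_norm = 0.037635546    # depends on the whole query; taken from the listing

# ClassicSimilarity components
tf = math.sqrt(freq)                              # ~1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~8.478011
query_weight = idf * query_norm                   # ~0.31907457
field_weight = tf * idf * field_norm              # ~0.46834838
score = query_weight * field_weight               # ~0.14943805
```

The outer `coord(4/7)` factor then scales the summed term scores by the fraction of query clauses that matched the document.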
    
    Content
    Master thesis Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien : Universität Wien / Library and Information Studies
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.18
    0.17932567 = product of:
      0.41842657 = sum of:
        0.059775226 = product of:
          0.17932567 = sum of:
            0.17932567 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.17932567 = score(doc=862,freq=2.0), product of:
                0.31907457 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037635546 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.17932567 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.17932567 = score(doc=862,freq=2.0), product of:
            0.31907457 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037635546 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.17932567 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.17932567 = score(doc=862,freq=2.0), product of:
            0.31907457 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037635546 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.42857143 = coord(3/7)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.04
    0.035564635 = product of:
      0.08298415 = sum of:
        0.014326792 = weight(_text_:information in 5843) [ClassicSimilarity], result of:
          0.014326792 = score(doc=5843,freq=10.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.21684799 = fieldWeight in 5843, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5843)
        0.06015886 = weight(_text_:retrieval in 5843) [ClassicSimilarity], result of:
          0.06015886 = score(doc=5843,freq=20.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.5284309 = fieldWeight in 5843, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5843)
        0.008498495 = product of:
          0.025495486 = sum of:
            0.025495486 = weight(_text_:22 in 5843) [ClassicSimilarity], result of:
              0.025495486 = score(doc=5843,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19345059 = fieldWeight in 5843, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5843)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
     Purpose: Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
     Design/methodology/approach: In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and misstep/s.
     Findings: Various factors compromise shared files retrieval including: collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
     Originality/value: Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.1, S.130-147
  4. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.03
    0.031345837 = product of:
      0.073140286 = sum of:
        0.017192151 = weight(_text_:information in 1045) [ClassicSimilarity], result of:
          0.017192151 = score(doc=1045,freq=10.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.2602176 = fieldWeight in 1045, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
        0.045657367 = weight(_text_:retrieval in 1045) [ClassicSimilarity], result of:
          0.045657367 = score(doc=1045,freq=8.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.40105087 = fieldWeight in 1045, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
        0.010290766 = product of:
          0.030872298 = sum of:
            0.030872298 = weight(_text_:29 in 1045) [ClassicSimilarity], result of:
              0.030872298 = score(doc=1045,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.23319192 = fieldWeight in 1045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1045)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
     The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
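As a minimal numerical sketch of the contrast this abstract draws (the toy matrix and variable names are illustrative assumptions, not data from the paper): LSA applies an SVD to the raw document-term matrix, whose leading dimension largely tracks the margins, while CA decomposes the matrix of standardized residuals, from which the marginal (trivial) dimension is removed by construction:

```python
import numpy as np

# Toy document-term count matrix (4 docs x 5 terms), illustrative only
M = np.array([[3, 1, 0, 2, 0],
              [1, 4, 1, 0, 0],
              [0, 1, 3, 1, 2],
              [2, 0, 1, 1, 3]], dtype=float)

# LSA: SVD of the raw (or weighted) matrix; dimension 1 mixes in margins
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# CA: SVD of standardized residuals, margins factored out beforehand
P = M / M.sum()            # correspondence matrix
r = P.sum(axis=1)          # row masses (document margins)
c = P.sum(axis=0)          # column masses (term margins)
E = np.outer(r, c)         # expected proportions under independence
S = (P - E) / np.sqrt(E)   # standardized residuals
Uc, sc, Vct = np.linalg.svd(S, full_matrices=False)

# The marginal direction is exactly annihilated: sqrt(r) lies in the
# left null space of S, so no CA dimension is spent on marginal effects.
print(np.allclose(np.sqrt(r) @ S, 0.0))  # True
```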
    Date
    15. 9.2023 12:28:29
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
  5. Huvila, I.: Making and taking information (2022) 0.03
    0.027374443 = product of:
      0.0638737 = sum of:
        0.030754255 = weight(_text_:information in 527) [ClassicSimilarity], result of:
          0.030754255 = score(doc=527,freq=32.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.46549138 = fieldWeight in 527, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=527)
        0.022828683 = weight(_text_:retrieval in 527) [ClassicSimilarity], result of:
          0.022828683 = score(doc=527,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.20052543 = fieldWeight in 527, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=527)
        0.010290766 = product of:
          0.030872298 = sum of:
            0.030872298 = weight(_text_:29 in 527) [ClassicSimilarity], result of:
              0.030872298 = score(doc=527,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.23319192 = fieldWeight in 527, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=527)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
    Information behavior theory covers different aspects of the totality of information-related human behavior rather unevenly. The transitions or trading zones between different types of information activities have remained perhaps especially under-theorized. This article interrogates and expands a conceptual apparatus of information making and information taking as a pair of substantial concepts for explaining, in part, the mobility of information in terms of doing that unfolds as a process of becoming rather than of being, and in part, what is happening when information comes into being and when something is taken up for use as information. Besides providing an apparatus to describe the nexus of information provision and acquisition, a closer consideration of the parallel doings opens opportunities to enrich the inquiry of the conditions and practice of information seeking, appropriation, discovery, and retrieval as modes taking, and learning and information use as its posterities.
    Date
    10. 3.2022 14:10:29
    Series
     JASIS&T special issue on information behavior and information practices theory
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.4, S.528-541
    Theme
    Information
  6. Hartel, J.: ¬The red thread of information (2020) 0.02
    0.024075422 = product of:
      0.056175984 = sum of:
        0.028653584 = weight(_text_:information in 5839) [ClassicSimilarity], result of:
          0.028653584 = score(doc=5839,freq=40.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.43369597 = fieldWeight in 5839, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5839)
        0.019023903 = weight(_text_:retrieval in 5839) [ClassicSimilarity], result of:
          0.019023903 = score(doc=5839,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.16710453 = fieldWeight in 5839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5839)
        0.008498495 = product of:
          0.025495486 = sum of:
            0.025495486 = weight(_text_:22 in 5839) [ClassicSimilarity], result of:
              0.025495486 = score(doc=5839,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19345059 = fieldWeight in 5839, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5839)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
     Purpose: In "The Invisible Substrate of Information Science", a landmark article about the discipline of information science, Marcia J. Bates wrote that "...we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
     Design/methodology/approach: Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
     Findings: Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely, domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
     Research limitations/implications: With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
     Originality/value: Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
    Theme
    Information
  7. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.02
    0.023914203 = product of:
      0.05579981 = sum of:
        0.013316983 = weight(_text_:information in 941) [ClassicSimilarity], result of:
          0.013316983 = score(doc=941,freq=6.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.20156369 = fieldWeight in 941, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=941)
        0.032284632 = weight(_text_:retrieval in 941) [ClassicSimilarity], result of:
          0.032284632 = score(doc=941,freq=4.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.2835858 = fieldWeight in 941, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=941)
        0.0101981945 = product of:
          0.030594582 = sum of:
            0.030594582 = weight(_text_:22 in 941) [ClassicSimilarity], result of:
              0.030594582 = score(doc=941,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.23214069 = fieldWeight in 941, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=941)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches toward name gender identification rely exclusively on using the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.461-475
  8. Strecker, D.: Dataset Retrieval : Informationsverhalten von Datensuchenden und das Ökosystem von Data-Retrieval-Systemen (2022) 0.02
    0.02237526 = product of:
      0.0783134 = sum of:
        0.010251419 = weight(_text_:information in 4021) [ClassicSimilarity], result of:
          0.010251419 = score(doc=4021,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.1551638 = fieldWeight in 4021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4021)
        0.068061985 = weight(_text_:retrieval in 4021) [ClassicSimilarity], result of:
          0.068061985 = score(doc=4021,freq=10.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.59785134 = fieldWeight in 4021, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4021)
      0.2857143 = coord(2/7)
    
    Abstract
     Various stakeholders are calling for better availability of research data. The success of these initiatives depends largely on how easily the published datasets can be found, which is why dataset retrieval is gaining importance. Dataset retrieval is a special form of information retrieval concerned with finding datasets. This contribution summarizes current research findings on the information behavior of people searching for data. Two search services with different orientations are then presented and compared as examples. To show how these services interlock, overlaps between their data holdings are used to analyze the exchange of metadata.
  9. Fuhr, N.: Modelle im Information Retrieval (2023) 0.02
    0.02168017 = product of:
      0.050587066 = sum of:
        0.009061059 = weight(_text_:information in 800) [ClassicSimilarity], result of:
          0.009061059 = score(doc=800,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.13714671 = fieldWeight in 800, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=800)
        0.032950368 = weight(_text_:retrieval in 800) [ClassicSimilarity], result of:
          0.032950368 = score(doc=800,freq=6.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.28943354 = fieldWeight in 800, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=800)
        0.008575639 = product of:
          0.025726916 = sum of:
            0.025726916 = weight(_text_:29 in 800) [ClassicSimilarity], result of:
              0.025726916 = score(doc=800,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19432661 = fieldWeight in 800, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=800)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
     Information retrieval models (IR models) specify how, for a given query, the answer documents are determined from a document collection. The starting point of every model is a set of assumptions about the knowledge representation (see Part B, Methods and Systems of Subject Indexing) of queries and documents. Here we call the elements of these representations terms; from the model's point of view it does not matter how these terms are derived from the document (and, analogously, from the query entered by the user): for texts, computational-linguistic methods are frequently used, but more complex automatic or manual indexing procedures can also be applied. Representations also have a certain structure. A document is usually treated as a set or multiset of terms, where in the latter case multiple occurrences are taken into account. This document representation is in turn mapped onto a so-called document description, in which the individual terms may be weighted. In the following we distinguish only between unweighted indexing (the weight of a term is either 0 or 1) and weighted indexing (the weight is a non-negative real number). Analogously, there is a query representation; if a natural-language query is used, the procedures for document texts can be applied to it. Alternatively, graphical or formal query languages are used, where from the models' perspective their logical structure (as in Boolean retrieval) is particularly relevant. The query representation is then transformed into a query description.
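The distinction between unweighted and weighted indexing in this abstract can be sketched as follows (the toy documents, helper names, and inner-product scoring are illustrative assumptions, not a specific model from the chapter): an unweighted document description keeps only 0/1 term weights, a weighted one keeps term frequencies, and a simple inner product then ranks documents against the query description:

```python
# Toy document representations as term multisets (illustrative only)
docs = {
    "d1": ["retrieval", "modell", "retrieval", "anfrage"],
    "d2": ["dokument", "term", "gewicht"],
}
query = ["retrieval", "anfrage"]

def describe(terms, weighted):
    """Map a representation (multiset of terms) to a document description."""
    desc = {}
    for t in terms:
        desc[t] = desc.get(t, 0) + 1 if weighted else 1
    return desc

def score(doc_desc, query_desc):
    """Inner product of document description and query description."""
    return sum(w * query_desc.get(t, 0) for t, w in doc_desc.items())

q = describe(query, weighted=False)
unweighted = {d: score(describe(t, False), q) for d, t in docs.items()}
weighted = {d: score(describe(t, True), q) for d, t in docs.items()}
print(unweighted)  # {'d1': 2, 'd2': 0}
print(weighted)    # {'d1': 3, 'd2': 0}
```

With unweighted indexing the repeated term "retrieval" in d1 counts once; with weighted indexing its multiplicity raises the score, which is the practical difference between the two description types.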
    Date
    24.11.2022 17:20:29
  10. Wu, Z.; Lu, C.; Zhao, Y.; Xie, J.; Zou, D.; Su, X.: ¬The protection of user preference privacy in personalized information retrieval : challenges and overviews (2021) 0.02
    0.020551383 = product of:
      0.07192984 = sum of:
        0.018122118 = weight(_text_:information in 520) [ClassicSimilarity], result of:
          0.018122118 = score(doc=520,freq=16.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.27429342 = fieldWeight in 520, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
        0.05380772 = weight(_text_:retrieval in 520) [ClassicSimilarity], result of:
          0.05380772 = score(doc=520,freq=16.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.47264296 = fieldWeight in 520, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper reviews a large number of research achievements relevant to user privacy protection in an untrusted network environment, and then analyzes and evaluates their application limitations in personalized information retrieval, to establish the conditional constraints that an effective approach for user preference privacy protection in personalized information retrieval should meet, thus providing a basic reference for the solution of this problem. First, based on the basic framework of a personalized information retrieval platform, we establish a complete set of constraints for user preference privacy protection in terms of security, usability, efficiency, and accuracy. Then, we comprehensively review the technical features for all kinds of popular methods for user privacy protection, and analyze their application limitations in personalized information retrieval, according to the constraints of preference privacy protection. The results show that personalized information retrieval has higher requirements for users' privacy protection, i.e., it is required to comprehensively improve the security of users' preference privacy on the untrusted server-side, under the precondition of not changing the platform, algorithm, efficiency, and accuracy of personalized information retrieval. However, all kinds of existing privacy methods still cannot meet the above requirements. This paper is an important study attempt to the problem of user preference privacy protection of personalized information retrieval, which can provide a basic reference and direction for the further study of the problem.
  11. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.02
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  12. Chi, Y.; He, D.; Jeng, W.: Laypeople's source selection in online health information-seeking process (2020) 0.02
    Abstract
    For laypeople, searching online health information resources can be challenging due to topic complexity and the large number of online sources of differing quality. The goal of this article is to examine which online sources laypeople select, among all those available, to address their health-related information needs, and whether and how much the severity of a health condition influences their selection. Twenty-four participants were recruited individually, and each was asked (using a retrieval system called HIS) to search for information regarding a severe health condition and a mild health condition, respectively. The selected online health information sources were automatically captured by the HIS system and classified at both the website and webpage levels. Participants' selection behavior patterns were then plotted across the whole information-seeking process. Our results demonstrate that laypeople's source selection fluctuates during the health information-seeking process and also varies with the severity of the health condition. This study reveals laypeople's real usage of different types of online health information sources and has implications for the design of search engines as well as the development of health literacy programs.
    Date
    12.11.2020 13:22:09
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.12, S.1484-1499
  13. Petras, V.; Womser-Hacker, C.: Evaluation im Information Retrieval (2023) 0.02
    Abstract
    The goal of an evaluation is to determine whether, and to what extent, an information system meets the requirements placed on it. Information systems can be evaluated from various perspectives. For a holistic evaluation that considers different quality aspects (e.g., how well a system ranks relevant documents, how quickly it executes a search, how the result presentation is designed, or how searchers are guided through the system) and that checks the fulfillment of several requirements, it is advisable to apply both perspectival and methodological triangulation (i.e., the use of several approaches to quality assessment). In information retrieval (IR), evaluation concentrates on assessing the quality of the search function of an information retrieval system (IRS), where a distinction is often drawn between system-centered and user-centered evaluation. This chapter focuses on system-centered evaluation, while other chapters of this handbook discuss other evaluation approaches (see chapters C 4 Interactive Information Retrieval, C 7 Cross-Language Information Retrieval, and D 1 Information Behavior).
  14. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.02
    Abstract
    Experimentation is the process of trying something out to discover what happens. It is a widespread information practice, yet often bypassed in information-behavior research. This article argues that experimentation complements prior knowledge, documents, and people as an important fourth class of information sources. Relative to the other classes, the distinguishing characteristics of experimentation are that it is a personal (as opposed to interpersonal) source and that it provides "backtalk." When the information seeker tries something out and then attends to the resulting situation, it is as though the materials of the situation talk back: they provide the information seeker with a situated and direct experience of the consequences of the tried-out options. In this way, experimentation involves obtaining information by creating it. It also involves turning material and behavioral processes into information interactions. Thereby, information seeking by experimentation is important to practical information literacy and extends information-behavior research with new insights on the interrelations between creating and seeking information.
    Date
    21. 3.2023 19:22:29
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.383-387
  15. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.02
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
    LCSH
    Information science
    Information storage and retrieval systems / Management
    Subject
    Information science
    Information storage and retrieval systems / Management
  16. Marques Redigolo, F.; Lopes Fujita, M.S.; Gil-Leiva, I.: Guidelines for subject analysis in subject cataloging (2022) 0.02
    Abstract
    The representation of information in subject cataloging, as the outcome of subject analysis, depends on the cataloger's prior knowledge and is influenced by subjectivity. Subject analysis in cataloging is the central theme of this investigation, whose aim is to elaborate guidelines for subject analysis in cataloging. For this purpose, we examined how books are cataloged in university libraries. The Individual Verbal Protocol was applied with catalogers from Brazilian and Spanish university libraries. Directions for the elements and variables of subject analysis, together with procedures for its sound development, were obtained and constitute the Guidelines of Subject Analysis in Cataloging. We conclude that the guidelines, organized into four sections, are suitable for incorporation into subject cataloging procedure manuals in order to improve the quality of representation and of information retrieval results.
    Date
    29. 9.2022 18:14:27
  17. Amirhosseini, M.: ¬A novel method for ranking knowledge organization systems (KOSs) based on cognition states (2022) 0.02
    Abstract
    The purpose of this article is to delineate the process of evolution of knowledge organization systems (KOSs) through the identification of principles of unity, such as internal and external unity, in organizing the structure of KOSs for content storage and retrieval purposes, and to explain a novel method for ranking KOSs by proposing the principle of rank unity. The types of KOS addressed in this article include dictionaries, Roget's thesaurus, thesauri, micro-, macro-, and meta-thesauri, and ontologies, including lower-, middle-, and upper-level ontologies. The article relies on dialectic models to clarify the ideas in Kant's theory of knowledge, identifying logical relationships between categories (i.e., thesis, antithesis, and synthesis) in the creation of data, information, and knowledge in the human mind. The analysis adopts a historical methodology, more specifically a documentary method, as its reasoning process to propose a conceptual model for ranking KOSs. The study endeavors to explain the main elements of data, information, and knowledge, along with engineering mechanisms such as data, information, and knowledge engineering, in developing the structure of KOSs, and also aims to clarify their influence on content storage and retrieval performance. KOSs have followed related principles of order to achieve an internal order, which can be examined by analyzing the principle of internal unity in knowledge organizations. The principle of external unity leads to the necessity of compatibility and interoperability between different types of KOS to achieve semantic harmonization and increase the performance of content storage and retrieval. With the introduction of the principle of rank unity, a ranking method for KOSs that uses cognition states as criteria can determine the position of each knowledge organization with respect to the others. The related criteria of the principle of rank unity (cognition states) are derived from Immanuel Kant's epistemology. The results show that KOSs, while having defined positions in cognition states, specific principles of order, related operational mechanisms, and related principles of unity for achieving their specific purposes, have benefited from the developmental experiences of previous KOSs; moreover, their developmental processes owe much to the experiences and methods of their predecessors.
    Date
    19.11.2023 19:07:29
  18. Dang, E.K.F.; Luk, R.W.P.; Allan, J.: ¬A retrieval model family based on the probability ranking principle for ad hoc retrieval (2022) 0.02
    Abstract
    Many successful retrieval models are derived from, or conform to, the probability ranking principle (PRP). We present a new derivation of a document ranking function given by the probability of relevance of a document, conforming to the PRP. Our formulation yields a family of retrieval models, called probabilistic binary relevance (PBR) models, with various instantiations obtained through different probability estimations. Extensive experiments on a range of TREC collections show statistically significant improvements of the PBR models over established baselines, especially on the large ClueWeb09 Cat-B collection.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.8, S.1140-1154
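    The probability ranking principle that this abstract builds on states that a system achieves optimal retrieval effectiveness by presenting documents in decreasing order of their estimated probability of relevance. A minimal sketch of that ordering step (the probability estimates below are made-up placeholders for illustration, not output of the PBR models):

    ```python
    # Rank documents by decreasing estimated probability of relevance (PRP).
    # The probability estimates are hypothetical placeholders.
    estimated_p_rel = {
        "doc_a": 0.12,
        "doc_b": 0.55,
        "doc_c": 0.08,
        "doc_d": 0.31,
    }

    ranking = sorted(estimated_p_rel, key=estimated_p_rel.get, reverse=True)
    print(ranking)  # → ['doc_b', 'doc_d', 'doc_a', 'doc_c']
    ```

    Under the PRP, any model that produces better probability estimates plugs into this same ordering step unchanged.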
  19. Mandl, T.; Diem, S.: Bild- und Video-Retrieval (2023) 0.02
    Abstract
    Digital image processing has long since reached everyday life: automated passport checks, face recognition on mobile phones, and apps that identify plants from photos are just a few examples of the use of this technology. Digital image processing for analyzing the content of images can improve access to knowledge and is therefore relevant to information science. When searching for visual information, systems still frequently fall back on descriptive metadata, because these language-based methods usually work robustly on mass data. The focus of this chapter is on automatic content analysis of images (content-based image retrieval), not on purely metadata-based systems that use words to describe images (see chapter B 9 Metadata) and thus ultimately perform text retrieval (concept-based image retrieval) (see chapter C 1 Information Science Perspectives on Information Retrieval).
  20. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.02
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to, and uses of, Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing document corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. The available work shows that, in many cases, the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
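    The figure at the end of each hit's title line (e.g., 0.02) is a Lucene relevance score computed with classic tf-idf similarity. As a hedged sketch (assuming Lucene ClassicSimilarity semantics, not this database's actual code), the per-term fieldWeight combines the square root of the term frequency, a smoothed inverse document frequency, and a field-length norm. Using the collection statistics for the term "retrieval" in this database (document frequency 5836 out of 44218 documents):

    ```python
    import math

    def classic_idf(doc_freq: int, num_docs: int) -> float:
        # Lucene ClassicSimilarity: idf = 1 + ln(numDocs / (docFreq + 1))
        return 1.0 + math.log(num_docs / (doc_freq + 1))

    def field_weight(term_freq: int, doc_freq: int, num_docs: int,
                     field_norm: float) -> float:
        # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
        return math.sqrt(term_freq) * classic_idf(doc_freq, num_docs) * field_norm

    # "retrieval": docFreq = 5836, maxDocs = 44218; a document containing the
    # term 8 times with a fieldNorm of 0.0546875 gets:
    w = field_weight(8, 5836, 44218, 0.0546875)
    print(round(w, 4))  # → 0.4679
    ```

    The final document score additionally multiplies in a query normalization factor and a coordination factor (coord) reflecting the fraction of query terms matched.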

Languages

  • e 664
  • d 144
  • pt 3
  • m 2
  • sp 1

Types

  • a 761
  • el 85
  • m 27
  • p 8
  • s 6
  • A 1
  • EL 1
  • x 1
