Search (802 results, page 2 of 41)

  • year_i:[2020 TO 2030}
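
    The active filter uses Lucene range syntax: a square bracket marks an inclusive bound and a curly brace an exclusive one, so year_i:[2020 TO 2030} matches 2020 <= year_i < 2030, i.e. publication years 2020 through 2029. As an illustration (assuming a Solr-style endpoint behind this page, which is an assumption, not something the page itself shows), the same restriction would travel as a filter-query parameter:

      fq=year_i:[2020 TO 2030}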
  1. Wartena, C.; Golub, K.: Evaluierung von Verschlagwortung im Kontext des Information Retrievals (2021) 0.01
    0.0075341864 = product of:
      0.035159536 = sum of:
        0.013444485 = weight(_text_:system in 376) [ClassicSimilarity], result of:
          0.013444485 = score(doc=376,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=376)
        0.004176737 = weight(_text_:information in 376) [ClassicSimilarity], result of:
          0.004176737 = score(doc=376,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=376)
        0.017538311 = weight(_text_:retrieval in 376) [ClassicSimilarity], result of:
          0.017538311 = score(doc=376,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.23632148 = fieldWeight in 376, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=376)
      0.21428572 = coord(3/14)
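
    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown, and the same structure repeats for every result on this page: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the per-term sum is scaled by coord, the fraction of query clauses that matched (here 3 of 14). A minimal Python sketch, reconstructed from the numbers displayed above (it reproduces the displayed arithmetic only and is not Lucene itself):

      import math

      query_norm = 0.02453417   # query-wide normalization, from the tree above
      field_norm = 0.0390625    # per-field length norm stored for doc 376

      def term_score(freq, idf):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # idf * queryNorm
          field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      # (termFreq, idf) for "system", "information", "retrieval" in doc 376;
      # ClassicSimilarity computes idf as 1 + ln(maxDocs / (docFreq + 1)).
      terms = [(2.0, 3.1495528), (2.0, 1.7554779), (4.0, 3.024915)]

      score = sum(term_score(f, i) for f, i in terms) * (3 / 14)  # coord(3/14)
      print(score)  # ~0.0075341864, the score shown for this result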
    
    Abstract
    This article aims to give an overview of the possibilities, challenges, and limits discussed in the literature for using retrieval as an extrinsic method of evaluating the results of verbal subject indexing. Subject indexing in general, and keyword assignment in particular, can be evaluated intrinsically or extrinsically. Intrinsic evaluation refers to properties of the indexing that are assumed to be suitable indicators of its quality, such as formal uniformity (with regard to the number of descriptors assigned per document, granularity, etc.), consistency, or agreement between the results of different indexers. Extrinsic evaluation measures the quality of the chosen descriptors by how well they actually prove themselves in searching. Although extrinsic evaluation gives more direct information about whether the indexing fulfils its purpose, and should therefore be preferred, it is complicated and often problematic. In a retrieval system, different algorithms and data sources interlock in complex ways and, during evaluation, additionally interact with users and search tasks. A component of the system cannot be evaluated simply by swapping it out and comparing it with another component, since the same resource or algorithm can behave differently in different environments. We present and discuss relevant evaluation approaches and conclude with some recommendations for evaluating keyword indexing in the context of retrieval.
  2. Ghosh, S.S.; Das, S.; Chatterjee, S.K.: Human-centric faceted approach for ontology construction (2020) 0.01
    0.0073284465 = product of:
      0.034199417 = sum of:
        0.013444485 = weight(_text_:system in 5731) [ClassicSimilarity], result of:
          0.013444485 = score(doc=5731,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 5731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5731)
        0.008353474 = weight(_text_:information in 5731) [ClassicSimilarity], result of:
          0.008353474 = score(doc=5731,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.19395474 = fieldWeight in 5731, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5731)
        0.012401459 = weight(_text_:retrieval in 5731) [ClassicSimilarity], result of:
          0.012401459 = score(doc=5731,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.16710453 = fieldWeight in 5731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5731)
      0.21428572 = coord(3/14)
    
    Abstract
    In this paper, we propose an ontology building method, called human-centric faceted approach for ontology construction (HCFOC). HCFOC uses the human-centric approach, combined with the idea of selective dissemination of information (SDI), to deal with context. Further, this ontology construction process makes use of facet analysis and an analytico-synthetic classification approach. This novel fusion contributes to the originality of HCFOC and distinguishes it from other existing ontology construction methodologies. Based on HCFOC, an ontology of the tourism domain has been designed using the Protégé-5.5.0 ontology editor. The HCFOC methodology has provided the necessary flexibility, extensibility, and robustness, and has facilitated the capture of background knowledge. It models the tourism ontology in such a way that it is able to deal with the context of a tourist's information need with precision. This is evident from the result that more than 90% of the users' queries were successfully met. The use of domain knowledge and techniques from both library and information science and computer science has helped in the realization of the desired purpose of this ontology construction process. It is envisaged that HCFOC will have implications for ontology developers. The demonstrated tourism ontology can support any tourism information retrieval system.
  3. Strecker, D.: Dataset Retrieval : Informationsverhalten von Datensuchenden und das Ökosystem von Data-Retrieval-Systemen (2022) 0.01
    0.007293084 = product of:
      0.051051587 = sum of:
        0.006682779 = weight(_text_:information in 4021) [ClassicSimilarity], result of:
          0.006682779 = score(doc=4021,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 4021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4021)
        0.044368807 = weight(_text_:retrieval in 4021) [ClassicSimilarity], result of:
          0.044368807 = score(doc=4021,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.59785134 = fieldWeight in 4021, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4021)
      0.14285715 = coord(2/14)
    
    Abstract
    Various stakeholders are calling for better availability of research data. The success of these initiatives depends largely on how easily the published datasets can be found, which is why dataset retrieval is gaining in importance. Dataset retrieval is a special form of information retrieval that deals with finding datasets. This article summarizes current research findings on the information behaviour of data seekers. Two search services with different orientations are then presented and compared as examples. To show how these services interlock, content overlaps between their data holdings are used to analyse the exchange of metadata.
  4. Sa, N.; Yuan, X.(J.): Improving the effectiveness of voice search systems through partial query modification (2022) 0.01
    0.007234883 = product of:
      0.050644178 = sum of:
        0.045632094 = weight(_text_:system in 635) [ClassicSimilarity], result of:
          0.045632094 = score(doc=635,freq=16.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.5905411 = fieldWeight in 635, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=635)
        0.0050120843 = weight(_text_:information in 635) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=635,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=635)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper addresses the importance of improving the effectiveness of voice search systems through partial query modification. A user-centered experiment was designed to compare the effectiveness of an experimental system with a partial query modification feature against a baseline system in which users could issue complete queries only, with 32 participants each searching on eight different tasks. The results indicate that the participants spent significantly more time preparing the modification but significantly less time speaking the modification when using the experimental system rather than the baseline system. The participants found that the experimental system (a) was more effective, (b) gave them more control, (c) was easier for the search tasks, and (d) saved them more time than the baseline system. The results contribute to improving future voice search system design and benefit the research community in general. System implications and future work are discussed.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.8, S.1092-1105
  5. Qin, H.; Wang, H.; Johnson, A.: Understanding the information needs and information-seeking behaviours of new-generation engineering designers for effective knowledge management (2020) 0.01
    0.0071632313 = product of:
      0.033428412 = sum of:
        0.010755588 = weight(_text_:system in 181) [ClassicSimilarity], result of:
          0.010755588 = score(doc=181,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.13919188 = fieldWeight in 181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=181)
        0.016024742 = weight(_text_:information in 181) [ClassicSimilarity], result of:
          0.016024742 = score(doc=181,freq=46.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.37206972 = fieldWeight in 181, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=181)
        0.0066480828 = product of:
          0.0132961655 = sum of:
            0.0132961655 = weight(_text_:22 in 181) [ClassicSimilarity], result of:
              0.0132961655 = score(doc=181,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.15476047 = fieldWeight in 181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=181)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose This paper aims to explore the information needs and information-seeking behaviours of the new generation of engineering designers. A survey study is used to establish what their information needs are, how these needs change during an engineering design project and how their information-seeking behaviours have been influenced by the newly developed information technologies (ITs). Through an in-depth analysis of the survey results, the key functions have been identified for the next-generation management systems. Design/methodology/approach The paper first proposed four hypotheses on the information needs and information-seeking behaviours of young engineers. Then, a survey study was undertaken to understand their information usage in terms of the information needs and information-seeking behaviours during a complete engineering design process. Through analysing the survey results, several findings were obtained and, on this basis, further comparisons were made to discuss and evaluate the hypotheses. Findings The paper has revealed that the engineering designers' information needs evolve throughout the engineering design project; thus, they should be assisted at several different levels. Although they intend to search for information and knowledge on know-what and know-how, what they really require is know-why knowledge that helps them complete design tasks. Also, the paper has shown how the newly developed ITs and web-based applications have influenced the engineers' information-seeking practices. Research limitations/implications The research subjects chosen in this study are engineering students in universities who, although not as experienced as engineers in companies, do go through a complete design process with tasks similar to industrial scenarios. In addition, the focus of this study is to understand the information-seeking behaviours of a new generation of design engineers, so that the development of next-generation information and knowledge management systems can be well informed. In this sense, the results obtained do reveal some new knowledge about information-seeking behaviours during a general design process. Practical implications This paper first identifies the information needs and information-seeking behaviours of the new generation of engineering designers. On this basis, the varied ways to meet these needs and behaviours are discussed and elaborated. This is intended to provide the key characteristics for the development of the next-generation knowledge management system for engineering design projects. Originality/value This paper proposes a novel means of exploring the future engineers' information needs and information-seeking behaviours in a collaborative working environment. It also characterises the key features and functions for the next generation of knowledge management systems for engineering design.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.6, S.853-868
  6. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.01
    0.0068041594 = product of:
      0.031752743 = sum of:
        0.013444485 = weight(_text_:system in 900) [ClassicSimilarity], result of:
          0.013444485 = score(doc=900,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=900)
        0.005906798 = weight(_text_:information in 900) [ClassicSimilarity], result of:
          0.005906798 = score(doc=900,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13714671 = fieldWeight in 900, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=900)
        0.012401459 = weight(_text_:retrieval in 900) [ClassicSimilarity], result of:
          0.012401459 = score(doc=900,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.16710453 = fieldWeight in 900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=900)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of these approaches suffer from poor precision at low recall. The choice of indexing units has a great impact on search system effectiveness. The authors go beyond simple term indexing to propose a framework for multi-word term (MWT) filtering and indexing. Design/methodology/approach In this paper, the authors rely on ranking MWT to filter them, keeping the most effective ones for the indexing process. The proposed model is based on filtering MWT according to their ability to capture the document topic and to distinguish between different documents from the same collection. The authors rely on the hypothesis that the best MWT are those that achieve the greatest association degree. The experiments are carried out on English and French data sets. Findings The results indicate that this approach achieved precision enhancements at low recall, and it performed better than more advanced models based on term dependencies. Originality/value Using and testing different association measures to select the MWT that best describe the documents, to enhance precision in the first retrieved documents.
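    The abstract does not name the association measures the authors tested. As a purely illustrative stand-in (pointwise mutual information is one common association measure; it is an assumption here, not confirmed as the paper's choice), a short Python sketch ranking two-word candidates by association degree:

      import math
      from collections import Counter

      # Toy corpus: a hypothetical stand-in for the paper's English/French data sets.
      tokens = ("information retrieval systems rank documents and "
                "multi word terms improve information retrieval").split()

      unigrams = Counter(tokens)
      bigrams = Counter(zip(tokens, tokens[1:]))
      n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

      def pmi(w1, w2):
          # PMI(w1, w2) = log( P(w1, w2) / (P(w1) * P(w2)) )
          p_joint = bigrams[(w1, w2)] / n_bi
          p_indep = (unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)
          return math.log(p_joint / p_indep)

      # Keep the most strongly associated candidates for the indexing step.
      ranked = sorted(bigrams, key=lambda b: pmi(*b), reverse=True)
      print(ranked[:3])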
    Source
    Information discovery and delivery 51(2022) no.1, S.xx-xx
  7. Wang, J.; Halffman, W.; Zhang, Y.H.: Sorting out journals : the proliferation of journal lists in China (2023) 0.01
    0.006750046 = product of:
      0.031500213 = sum of:
        0.019013375 = weight(_text_:system in 1055) [ClassicSimilarity], result of:
          0.019013375 = score(doc=1055,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.24605882 = fieldWeight in 1055, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1055)
        0.004176737 = weight(_text_:information in 1055) [ClassicSimilarity], result of:
          0.004176737 = score(doc=1055,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 1055, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1055)
        0.008310104 = product of:
          0.016620208 = sum of:
            0.016620208 = weight(_text_:22 in 1055) [ClassicSimilarity], result of:
              0.016620208 = score(doc=1055,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.19345059 = fieldWeight in 1055, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1055)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Journal lists are instruments to categorize, compare, and assess research and scholarly publications. Our study investigates the remarkable proliferation of such journal lists in China, analyses their underlying values, quality criteria and ranking principles, and specifies how concerns specific to the Chinese research policy and publishing system inform these lists. Discouraged lists of "bad journals" reflect concerns over inferior research publications, but also the associated drain on public resources. Endorsed lists of "good journals" are based on criteria valued in research policy, reflecting the distinctive administrative logic of state-led Chinese research and publishing policy, ascribing worth to scientific journals according to specific national and institutional needs. In this regard, the criteria used for journal list construction are contextual and reflect the challenges of public resource allocation in a market-led publication system. Chinese journal lists therefore reflect research policy changes, such as a shift away from output-dominated research evaluation, specific concerns about research misconduct, and the balancing of national research needs against international standards, resulting in distinctly Chinese quality criteria. However, contrasting concerns and inaccuracies lead to contradictions in the "qualify" and "disqualify" binary logic and demonstrate inherent tensions and limitations in journal lists as policy tools.
    Date
    22. 9.2023 16:39:23
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.10, S.1207-1228
  8. Wu, Z.; Lu, C.; Zhao, Y.; Xie, J.; Zou, D.; Su, X.: ¬The protection of user preference privacy in personalized information retrieval : challenges and overviews (2021) 0.01
    0.006698603 = product of:
      0.046890218 = sum of:
        0.011813596 = weight(_text_:information in 520) [ClassicSimilarity], result of:
          0.011813596 = score(doc=520,freq=16.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.27429342 = fieldWeight in 520, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
        0.035076622 = weight(_text_:retrieval in 520) [ClassicSimilarity], result of:
          0.035076622 = score(doc=520,freq=16.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.47264296 = fieldWeight in 520, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper reviews a large number of research achievements relevant to user privacy protection in an untrusted network environment, and then analyzes and evaluates their application limitations in personalized information retrieval, to establish the conditional constraints that an effective approach for user preference privacy protection in personalized information retrieval should meet, thus providing a basic reference for the solution of this problem. First, based on the basic framework of a personalized information retrieval platform, we establish a complete set of constraints for user preference privacy protection in terms of security, usability, efficiency, and accuracy. Then, we comprehensively review the technical features of all kinds of popular methods for user privacy protection, and analyze their application limitations in personalized information retrieval, according to the constraints of preference privacy protection. The results show that personalized information retrieval places higher requirements on users' privacy protection, i.e., it is required to comprehensively improve the security of users' preference privacy on the untrusted server side, under the precondition of not changing the platform, algorithm, efficiency, and accuracy of personalized information retrieval. However, all kinds of existing privacy methods still cannot meet the above requirements. This paper is an important attempt to address the problem of user preference privacy protection in personalized information retrieval, and it can provide a basic reference and direction for further study of the problem.
  9. Lee, D.J.; Stvilia, B.; Ha, S.; Hahn, D.: ¬The structure and priorities of researchers' scholarly profile maintenance activities : a case of institutional research information management system (2023) 0.01
    0.0066630123 = product of:
      0.031094057 = sum of:
        0.013444485 = weight(_text_:system in 884) [ClassicSimilarity], result of:
          0.013444485 = score(doc=884,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=884)
        0.009339468 = weight(_text_:information in 884) [ClassicSimilarity], result of:
          0.009339468 = score(doc=884,freq=10.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.21684799 = fieldWeight in 884, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=884)
        0.008310104 = product of:
          0.016620208 = sum of:
            0.016620208 = weight(_text_:22 in 884) [ClassicSimilarity], result of:
              0.016620208 = score(doc=884,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.19345059 = fieldWeight in 884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=884)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Research information management systems (RIMS) have become critical components of information technology infrastructure on university campuses. They are used not just for sharing and promoting faculty research, but also for conducting faculty evaluation and development, facilitating research collaborations, identifying mentors for student projects, and expert consultants for local businesses. This study is one of the first empirical investigations of the structure of researchers' scholarly profile maintenance activities in a nonmandatory institutional RIMS. By analyzing the RIMS's log data, we identified 11 tasks researchers performed when updating their profiles. These tasks were further grouped into three activities: (a) adding publication, (b) enhancing researcher identity, and (c) improving research discoverability. In addition, we found that junior researchers and female researchers were more engaged in maintaining their RIMS profiles than senior researchers and male researchers. The results provide insights for designing profile maintenance action templates for institutional RIMS that are tailored to researchers' characteristics and help enhance researchers' engagement in the curation of their research information. This also suggests that female and junior researchers can serve as early adopters of institutional RIMS.
    Date
    22. 1.2023 18:43:02
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.186-204
  10. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.01
    0.0066312784 = product of:
      0.046418946 = sum of:
        0.011694863 = weight(_text_:information in 667) [ClassicSimilarity], result of:
          0.011694863 = score(doc=667,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.27153665 = fieldWeight in 667, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
        0.034724083 = weight(_text_:retrieval in 667) [ClassicSimilarity], result of:
          0.034724083 = score(doc=667,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.46789268 = fieldWeight in 667, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
      0.14285715 = coord(2/14)
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  11. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.01
    0.0062282225 = product of:
      0.029065037 = sum of:
        0.008353474 = weight(_text_:information in 1012) [ClassicSimilarity], result of:
          0.008353474 = score(doc=1012,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.19395474 = fieldWeight in 1012, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.012401459 = weight(_text_:retrieval in 1012) [ClassicSimilarity], result of:
          0.012401459 = score(doc=1012,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.16710453 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.008310104 = product of:
          0.016620208 = sum of:
            0.016620208 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.016620208 = score(doc=1012,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has become an emerging topic. However, these statistically important phrases contribute increasingly little to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers trying to quickly grasp a paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers, to bridge the semantic gap between them and the information producers, and we verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the macro-averaged scores on the Paper with Code dataset reach up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.759-774
  12. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.01
    0.0060341274 = product of:
      0.04223889 = sum of:
        0.032266766 = weight(_text_:system in 5996) [ClassicSimilarity], result of:
          0.032266766 = score(doc=5996,freq=8.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.41757566 = fieldWeight in 5996, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=5996)
        0.009972124 = product of:
          0.019944249 = sum of:
            0.019944249 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
              0.019944249 = score(doc=5996,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.23214069 = fieldWeight in 5996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5996)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  13. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.01
    0.0058529805 = product of:
      0.040970862 = sum of:
        0.011207362 = weight(_text_:information in 1045) [ClassicSimilarity], result of:
          0.011207362 = score(doc=1045,freq=10.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.2602176 = fieldWeight in 1045, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
        0.029763501 = weight(_text_:retrieval in 1045) [ClassicSimilarity], result of:
          0.029763501 = score(doc=1045,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40105087 = fieldWeight in 1045, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
      0.14285715 = coord(2/14)
    
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of the singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
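    A minimal sketch of the two decompositions on a toy count matrix (an illustration under the textbook definitions, not the paper's code, and ignoring the paper's weighting variants): LSA truncates the SVD of the document-term matrix itself, while CA first removes the row and column margins and decomposes the standardized residuals, which is why marginal effects never enter the CA solution.

      import numpy as np

      # Toy document-term count matrix: rows = documents, columns = terms.
      X = np.array([[2., 1., 0., 0.],
                    [1., 2., 1., 0.],
                    [0., 0., 2., 3.],
                    [0., 1., 1., 2.]])

      def lsa(X, k=2):
          # LSA: truncated SVD of the (here unweighted) document-term matrix.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U[:, :k] * s[:k]                  # document coordinates

      def ca(X, k=2):
          # CA: SVD of standardized residuals; the row/column margins
          # (marginal effects) are removed before the decomposition.
          P = X / X.sum()
          r, c = P.sum(axis=1), P.sum(axis=0)
          S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
          U, s, Vt = np.linalg.svd(S, full_matrices=False)
          return (U[:, :k] * s[:k]) / np.sqrt(r)[:, None]  # row principal coords

      print(lsa(X))
      print(ca(X))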
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
  14. Dang, E.K.F.; Luk, R.W.P.; Allan, J.: ¬A retrieval model family based on the probability ranking principle for ad hoc retrieval (2022) 0.01
    0.005795931 = product of:
      0.040571515 = sum of:
        0.0058474317 = weight(_text_:information in 638) [ClassicSimilarity], result of:
          0.0058474317 = score(doc=638,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13576832 = fieldWeight in 638, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=638)
        0.034724083 = weight(_text_:retrieval in 638) [ClassicSimilarity], result of:
          0.034724083 = score(doc=638,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.46789268 = fieldWeight in 638, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=638)
      0.14285715 = coord(2/14)
    
    Abstract
    Many successful retrieval models are derived based on, or conform to, the probability ranking principle (PRP). We present a new derivation of a document ranking function given by the probability of relevance of a document, conforming to the PRP. Our formulation yields a family of retrieval models, called probabilistic binary relevance (PBR) models, with various instantiations obtained by different probability estimations. Extensive experiments on a range of TREC collections show statistically significant improvements of the PBR models over some established baselines, especially on the large ClueWeb09 Cat-B collection.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.8, S.1140-1154
  15. Mandl, T.; Diem, S.: Bild- und Video-Retrieval (2023) 0.01
    0.005766395 = product of:
      0.040364765 = sum of:
        0.0070881573 = weight(_text_:information in 801) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=801,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 801, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=801)
        0.033276606 = weight(_text_:retrieval in 801) [ClassicSimilarity], result of:
          0.033276606 = score(doc=801,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.44838852 = fieldWeight in 801, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=801)
      0.14285715 = coord(2/14)
    
    Abstract
    Digital image processing has long since reached everyday life: automated passport checks, face recognition on mobile phones, and apps that identify plants from photos are just a few examples of this technology in use. Digital image processing for analysing the content of images can improve access to knowledge and is therefore relevant to information science. When searching for visual information, systems often still fall back on descriptive metadata, because these language-based methods tend to work robustly on mass data. The focus of this chapter is on automatic content analysis of images (content based image retrieval), not on pure metadata systems that use words to describe images (see chapter B 9 Metadaten) and thus ultimately perform text retrieval (concept based image retrieval) (see chapter C 1 Informationswissenschaftliche Perspektiven des Information Retrieval).
  16. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    0.005683953 = product of:
      0.03978767 = sum of:
        0.0100241685 = weight(_text_:information in 5365) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=5365,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 5365, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
        0.029763501 = weight(_text_:retrieval in 5365) [ClassicSimilarity], result of:
          0.029763501 = score(doc=5365,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40105087 = fieldWeight in 5365, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.14285715 = coord(2/14)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the intrinsic structure of the categories or use the categories as a tool for processing and analysing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  17. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.01
    0.005662316 = product of:
      0.03963621 = sum of:
        0.011574914 = weight(_text_:information in 249) [ClassicSimilarity], result of:
          0.011574914 = score(doc=249,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.2687516 = fieldWeight in 249, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
        0.028061297 = weight(_text_:retrieval in 249) [ClassicSimilarity], result of:
          0.028061297 = score(doc=249,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 249, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper describes an approach to path-finding in intelligent graphs, whose vertices are intelligent agents. A possible implementation of this approach is described, based on logical inference in a distributed frame hierarchy. The presented approach can be used for implementing distributed intelligent information systems that include automatic navigation and path generation in hypertext, which can be used, for example, in distance education, as well as for organizing intelligent web catalogues with flexible ontology-based information retrieval.
  18. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.01
    0.005556713 = product of:
      0.025931327 = sum of:
        0.013444485 = weight(_text_:system in 883) [ClassicSimilarity], result of:
          0.013444485 = score(doc=883,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=883)
        0.004176737 = weight(_text_:information in 883) [ClassicSimilarity], result of:
          0.004176737 = score(doc=883,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=883)
        0.008310104 = product of:
          0.016620208 = sum of:
            0.016620208 = weight(_text_:22 in 883) [ClassicSimilarity], result of:
              0.016620208 = score(doc=883,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.19345059 = fieldWeight in 883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=883)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging the authoritative academic resources ORCID and DOI. Using the method, we built LAGOS-AND, two large gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of last-name variation in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods, as well as MAG's author ID system, on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.168-185
  19. Hjoerland, B.: Information (2023) 0.01
    0.00537402 = product of:
      0.037618138 = sum of:
        0.020256098 = weight(_text_:information in 1118) [ClassicSimilarity], result of:
          0.020256098 = score(doc=1118,freq=24.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.47031528 = fieldWeight in 1118, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1118)
        0.017362041 = weight(_text_:retrieval in 1118) [ClassicSimilarity], result of:
          0.017362041 = score(doc=1118,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.23394634 = fieldWeight in 1118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1118)
      0.14285715 = coord(2/14)
    
    Abstract
    This article presents a brief history of the term "information" and its different meanings, which are both important and difficult because the different meanings of the term imply whole theories of knowledge. The article further considers the relation between "information" and the concepts "matter and energy", "data", "sign and meaning", "knowledge" and "communication". It presents and analyses the influence of "information" in information studies and knowledge organization and contains a presentation and critical analysis of some compound terms such as "information need", "information overload" and "information retrieval", which illuminate the use of the term information in information studies. An appendix provides a chronological list of definitions of information.
    Theme
    Information
  20. Elsweiler, D.; Kruschwitz, U.: Interaktives Information Retrieval (2023) 0.01
    0.0053588827 = product of:
      0.037512176 = sum of:
        0.009450877 = weight(_text_:information in 797) [ClassicSimilarity], result of:
          0.009450877 = score(doc=797,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.21943474 = fieldWeight in 797, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=797)
        0.028061297 = weight(_text_:retrieval in 797) [ClassicSimilarity], result of:
          0.028061297 = score(doc=797,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 797, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=797)
      0.14285715 = coord(2/14)
    
    Abstract
    Interactive information retrieval (IIR) aims to understand the complex interactions between users and systems in IR. There is an extensive literature on topics such as the formal modelling of search behaviour, the simulation of interaction, interactive features for supporting the search process, and the evaluation of interactive search systems. Interactive support is not limited to search alone; it also targets assistance with navigation and exploration.

Languages

  • e 671
  • d 124
  • pt 4
  • m 2
  • sp 1

Types

  • a 754
  • el 84
  • m 23
  • p 7
  • s 6
  • x 2
  • A 1
  • EL 1