Search (58 results, page 1 of 3)

  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.04
    0.043812923 = product of:
      0.13143876 = sum of:
        0.018466292 = weight(_text_:information in 667) [ClassicSimilarity], result of:
          0.018466292 = score(doc=667,freq=8.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.27153665 = fieldWeight in 667, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
        0.054829627 = weight(_text_:retrieval in 667) [ClassicSimilarity], result of:
          0.054829627 = score(doc=667,freq=8.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.46789268 = fieldWeight in 667, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
        0.05814285 = weight(_text_:techniques in 667) [ClassicSimilarity], result of:
          0.05814285 = score(doc=667,freq=2.0), product of:
            0.17065717 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.038739666 = queryNorm
            0.3406997 = fieldWeight in 667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
      0.33333334 = coord(3/9)
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
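    The score breakdown above follows Lucene's ClassicSimilarity model: each matching clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(tf) × idf × fieldNorm, and the clause sum is scaled by the coordination factor. A minimal sketch reproducing the numbers for doc 667 (constants copied from the explanation; the function name is mine):

    ```python
    from math import sqrt

    def term_score(freq, idf, query_norm, field_norm):
        """One clause of a ClassicSimilarity score: queryWeight * fieldWeight."""
        query_weight = idf * query_norm               # idf(t) * queryNorm
        field_weight = sqrt(freq) * idf * field_norm  # tf(freq) * idf * fieldNorm(doc)
        return query_weight * field_weight

    QUERY_NORM = 0.038739666   # queryNorm from the explanation
    FIELD_NORM = 0.0546875     # fieldNorm(doc=667)

    # (freq, idf) pairs for the three matching clauses in doc 667
    terms = {
        "information": (8.0, 1.7554779),
        "retrieval":   (8.0, 3.024915),
        "techniques":  (2.0, 4.405231),
    }

    total = sum(term_score(f, i, QUERY_NORM, FIELD_NORM) for f, i in terms.values())
    score = total * (3 / 9)    # coord(3/9): 3 of 9 query clauses matched
    print(f"{score:.9f}")      # ≈ 0.043812923, the value reported in the listing
    ```

    The same three-factor pattern (queryWeight, fieldWeight, coord) recurs in every breakdown on this page; only the frequencies, idf values, and fieldNorms differ per document.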
  2. Tramullas, J.: Temas y métodos de investigación en Ciencia de la Información, 2000-2019 : Revisión bibliográfica (2020) 0.02
    0.021174232 = product of:
      0.095284045 = sum of:
        0.01305764 = weight(_text_:information in 5929) [ClassicSimilarity], result of:
          0.01305764 = score(doc=5929,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1920054 = fieldWeight in 5929, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5929)
        0.0822264 = weight(_text_:techniques in 5929) [ClassicSimilarity], result of:
          0.0822264 = score(doc=5929,freq=4.0), product of:
            0.17065717 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.038739666 = queryNorm
            0.48182213 = fieldWeight in 5929, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5929)
      0.22222222 = coord(2/9)
    
    Abstract
    A systematic literature review is carried out, detailing the research topics and the methods and techniques used in information science in studies published between 2000 and 2019. The results obtained allow us to affirm that there is no consensus on the core topics of information science, as these evolve and change dynamically in relation to other disciplines, and with the dominant social and cultural contexts. With regard to the research methods and techniques, it can be stated that they have mostly been adopted from social sciences, with the addition of numerical methods, especially in the fields of bibliometric and scientometric research.
  3. Strecker, D.: Dataset Retrieval : Informationsverhalten von Datensuchenden und das Ökosystem von Data-Retrieval-Systemen (2022) 0.02
    0.017913533 = product of:
      0.0806109 = sum of:
        0.010552166 = weight(_text_:information in 4021) [ClassicSimilarity], result of:
          0.010552166 = score(doc=4021,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1551638 = fieldWeight in 4021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4021)
        0.07005873 = weight(_text_:retrieval in 4021) [ClassicSimilarity], result of:
          0.07005873 = score(doc=4021,freq=10.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.59785134 = fieldWeight in 4021, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4021)
      0.22222222 = coord(2/9)
    
    Abstract
     Various stakeholders are calling for better availability of research data. The success of these initiatives depends largely on how easily the published datasets can be found, which is why dataset retrieval is gaining in importance. Dataset retrieval is a special form of information retrieval that deals with finding datasets. This article summarizes current research on the information behavior of people searching for data. Two search services with different orientations are then presented and compared as examples. To show how these services interlock, overlaps between their data holdings are used to analyze the exchange of metadata.
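    The idf values that appear throughout these explanations are consistent with Lucene's ClassicSimilarity formula, idf(t) = 1 + ln(maxDocs / (docFreq + 1)). A quick check against the docFreq/idf pairs printed above (the function name is mine):

    ```python
    from math import log

    def classic_idf(doc_freq, max_docs):
        # Lucene ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + log(max_docs / (doc_freq + 1))

    # docFreq/idf pairs copied from the explanations (maxDocs = 44218)
    checks = [
        (20772, 1.7554779),   # _text_:information
        (5836,  3.024915),    # _text_:retrieval
        (1467,  4.405231),    # _text_:techniques
        (3622,  3.5018296),   # _text_:22
    ]
    for doc_freq, expected in checks:
        assert abs(classic_idf(doc_freq, 44218) - expected) < 1e-4
    print("all idf values reproduced")
    ```

    Rarer terms (smaller docFreq) get larger idf, which is why a single match on "techniques" can outweigh many matches on "information".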
  4. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.01
    0.014376299 = product of:
      0.06469335 = sum of:
        0.017696522 = weight(_text_:information in 1045) [ClassicSimilarity], result of:
          0.017696522 = score(doc=1045,freq=10.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.2602176 = fieldWeight in 1045, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
        0.046996824 = weight(_text_:retrieval in 1045) [ClassicSimilarity], result of:
          0.046996824 = score(doc=1045,freq=8.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.40105087 = fieldWeight in 1045, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
      0.22222222 = coord(2/9)
    
    Abstract
     The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
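    The fieldNorm values seen so far (0.0546875, 0.0625, 0.046875, 0.03125, …) encode ClassicSimilarity's length normalization, 1/sqrt(numFieldTerms), stored lossily in a single byte, which is why only a few discrete values recur across entries. A sketch, where the term counts are illustrative back-calculations rather than values from this page:

    ```python
    from math import sqrt

    def length_norm(num_terms):
        # ClassicSimilarity lengthNorm (field boost assumed 1.0),
        # before Lucene's lossy one-byte norm encoding
        return 1.0 / sqrt(num_terms)

    # Field lengths that would yield two of the norms in the listing
    assert abs(length_norm(256) - 0.0625) < 1e-12    # fieldNorm(doc=4021)
    assert abs(length_norm(1024) - 0.03125) < 1e-12  # fieldNorm(doc=1004)
    ```

    Longer fields get smaller norms, so a term match in a short abstract scores higher than the same match in a long one.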
  5. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.01
    0.009136267 = product of:
      0.0822264 = sum of:
        0.0822264 = weight(_text_:techniques in 5719) [ClassicSimilarity], result of:
          0.0822264 = score(doc=5719,freq=4.0), product of:
            0.17065717 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.038739666 = queryNorm
            0.48182213 = fieldWeight in 5719, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.11111111 = coord(1/9)
    
    Abstract
     The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
  6. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.008993879 = product of:
      0.040472455 = sum of:
        0.01305764 = weight(_text_:information in 572) [ClassicSimilarity], result of:
          0.01305764 = score(doc=572,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1920054 = fieldWeight in 572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.027414814 = weight(_text_:retrieval in 572) [ClassicSimilarity], result of:
          0.027414814 = score(doc=572,freq=2.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.23394634 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
      0.22222222 = coord(2/9)
    
    Abstract
     We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it to match the specifics of a particular user task. Such reduction aims to simplify knowledge processing without losing significant information. We propose methods for generating task thesauri from a domain ontology, containing the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates determine the significance of each concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
  7. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.01
    0.008021125 = product of:
      0.03609506 = sum of:
        0.010552166 = weight(_text_:information in 1004) [ClassicSimilarity], result of:
          0.010552166 = score(doc=1004,freq=8.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1551638 = fieldWeight in 1004, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1004)
        0.025542893 = product of:
          0.051085785 = sum of:
            0.051085785 = weight(_text_:theories in 1004) [ClassicSimilarity], result of:
              0.051085785 = score(doc=1004,freq=2.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.24140964 = fieldWeight in 1004, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. 
     The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
  8. Jörs, B.: ¬Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021", Regensburg) (2021) 0.01
    0.005847096 = product of:
      0.026311932 = sum of:
        0.013190207 = weight(_text_:information in 330) [ClassicSimilarity], result of:
          0.013190207 = score(doc=330,freq=8.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.19395474 = fieldWeight in 330, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=330)
        0.013121725 = product of:
          0.02624345 = sum of:
            0.02624345 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
              0.02624345 = score(doc=330,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.19345059 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     Nothing left but information ethics, information literacy, and information assessment? Yet it is precisely this sealing itself off from other disciplines that deepens the isolation of the "small field" of information science within the scientific community. This leaves it, as its last "independent" peripheral research areas, only those that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic talk on the state of information science at ISI 2021: "If academic information science (at least in Europe) hardly stands a chance of pushing back to the forefront of development in the area of systems and applications, there remain fields in which its contribution will be urgently needed in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds.): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.
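    The breakdown for this entry nests two coordination factors: the "22" clause is halved by coord(1/2) inside its own sub-sum before the outer sum is scaled by coord(2/9). Reproducing the arithmetic, with the clause weights copied from the explanation for doc 330:

    ```python
    # Clause weights from the explanation for doc 330
    w_information = 0.013190207        # weight(_text_:information in 330)
    w_22 = 0.02624345                  # weight(_text_:22 in 330)

    inner = w_22 * (1 / 2)             # coord(1/2) on the inner sub-sum
    outer_sum = w_information + inner  # 0.026311932 = sum of:
    score = outer_sum * (2 / 9)        # coord(2/9): 2 of 9 clauses matched
    assert abs(score - 0.005847096) < 1e-8
    ```

    The single-clause entries further down (nos. 9-15) follow the same pattern with only the "22" sub-sum and an outer coord(1/9).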
  9. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.00
    0.0034991268 = product of:
      0.03149214 = sum of:
        0.03149214 = product of:
          0.06298428 = sum of:
            0.06298428 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.06298428 = score(doc=4156,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    2. 3.2020 14:08:22
  10. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.00
    0.0029159389 = product of:
      0.02624345 = sum of:
        0.02624345 = product of:
          0.0524869 = sum of:
            0.0524869 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.0524869 = score(doc=5846,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    4. 5.2020 17:22:40
  11. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.00
    0.0029159389 = product of:
      0.02624345 = sum of:
        0.02624345 = product of:
          0.0524869 = sum of:
            0.0524869 = weight(_text_:22 in 5906) [ClassicSimilarity], result of:
              0.0524869 = score(doc=5906,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.38690117 = fieldWeight in 5906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5906)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    4. 5.2020 17:22:28
  12. Arndt, O.: Totale Telematik (2020) 0.00
    0.0029159389 = product of:
      0.02624345 = sum of:
        0.02624345 = product of:
          0.0524869 = sum of:
            0.0524869 = weight(_text_:22 in 5907) [ClassicSimilarity], result of:
              0.0524869 = score(doc=5907,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.38690117 = fieldWeight in 5907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5907)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    22. 6.2020 19:11:24
  13. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.00
    0.0029159389 = product of:
      0.02624345 = sum of:
        0.02624345 = product of:
          0.0524869 = sum of:
            0.0524869 = weight(_text_:22 in 82) [ClassicSimilarity], result of:
              0.0524869 = score(doc=82,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.38690117 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    22. 6.2020 19:16:24
  14. Baecker, D.: ¬Der Frosch, die Fliege und der Mensch : zum Tod von Humberto Maturana (2021) 0.00
    0.0029159389 = product of:
      0.02624345 = sum of:
        0.02624345 = product of:
          0.0524869 = sum of:
            0.0524869 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
              0.0524869 = score(doc=236,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.38690117 = fieldWeight in 236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    7. 5.2021 22:10:24
  15. Eyert, F.: Mathematische Wissenschaftskommunikation in der digitalen Gesellschaft (2023) 0.00
    0.0029159389 = product of:
      0.02624345 = sum of:
        0.02624345 = product of:
          0.0524869 = sum of:
            0.0524869 = weight(_text_:22 in 1001) [ClassicSimilarity], result of:
              0.0524869 = score(doc=1001,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.38690117 = fieldWeight in 1001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1001)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Source
    Mitteilungen der Deutschen Mathematiker-Vereinigung. 2023, H.1, S.22-25
  16. Petras, V.: ¬The identity of information science (2023) 0.00
    0.0027418465 = product of:
      0.02467662 = sum of:
        0.02467662 = weight(_text_:information in 1077) [ClassicSimilarity], result of:
          0.02467662 = score(doc=1077,freq=28.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.3628561 = fieldWeight in 1077, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1077)
      0.11111111 = coord(1/9)
    
    Abstract
     Purpose: This paper offers a definition of the core of information science, which encompasses most research in the field. The definition provides a unique identity for information science and positions it in the disciplinary universe.
     Design/methodology/approach: After motivating the objective, a definition of the core and an explanation of its key aspects are provided. The definition is related to other definitions of information science before controversial discourse aspects are briefly addressed: discipline vs. field, science vs. humanities, library vs. information science and application vs. theory. Interdisciplinarity as an often-assumed foundation of information science is challenged.
     Findings: Information science is concerned with how information is manifested across space and time. Information is manifested to facilitate and support the representation, access, documentation and preservation of ideas, activities, or practices, and to enable different types of interactions. Research and professional practice encompass the infrastructures - institutions and technology - and phenomena and practices around manifested information across space and time as its core contribution to the scholarly landscape. Information science collaborates with other disciplines to work on complex information problems that need multi- and interdisciplinary approaches to address them.
     Originality/value: The paper argues that new information problems may change the core of the field, but throughout its existence, the discipline has remained quite stable in its central focus, yet proved to be highly adaptive to the tremendous changes in the forms, practices, institutions and technologies around and for manifested information.
  17. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.0026289106 = product of:
      0.011830097 = sum of:
        0.0039570625 = weight(_text_:information in 405) [ClassicSimilarity], result of:
          0.0039570625 = score(doc=405,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.058186423 = fieldWeight in 405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.007873035 = product of:
          0.01574607 = sum of:
            0.01574607 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.01574607 = score(doc=405,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease are enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and the advantages of group-level decision-making, due in part to the advent of social systems of distributed cognition and the storage and sharing of information.
    Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
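    The change-point analysis mentioned in the abstract above can be illustrated with a minimal sketch: find the index at which the mean of a series shifts by minimizing the within-segment variance of the two resulting segments. This is an assumption-laden toy version for a single change point on made-up data; the paper's actual method, applied to hominin brain-volume data, detects multiple rate changes and is considerably more sophisticated.

    ```python
    def sse(xs):
        """Sum of squared deviations of a segment from its own mean."""
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    def single_change_point(series):
        """Return the split index that minimizes total within-segment variance,
        i.e. the most likely location of a single mean shift."""
        best_k, best_cost = None, float("inf")
        for k in range(1, len(series)):
            cost = sse(series[:k]) + sse(series[k:])
            if cost < best_cost:
                best_k, best_cost = k, cost
        return best_k

    # Toy 'rate' series whose mean jumps at index 5.
    rates = [1.0, 1.1, 0.9, 1.0, 1.05, 3.0, 3.1, 2.9, 3.05, 3.0]
    print(single_change_point(rates))  # → 5
    ```

    Real analyses of this kind typically also test whether the detected change is statistically significant rather than accepting the best split unconditionally.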
  18. Hasubick, J.; Wiesenmüller, H.: RVK-Registerbegriffe in der Katalogrecherche : Chancen und Grenzen (2022) 0.00
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  19. Almeida, P. de; Gnoli, C.: Fiction in a phenomenon-based classification (2021) 0.00
    
    Abstract
    In traditional classification, fictional works are indexed only by their form, genre, and language, while their subject content is believed to be irrelevant. However, recent research suggests that this may not be the best approach. We tested the indexing of a small sample of selected fictional works with the Integrative Levels Classification (ILC2), a freely faceted system based on phenomena instead of disciplines, and examined the structure of the resulting classmarks. Issues in the process of subject analysis, such as the selection of relevant vs. non-relevant themes and the citation order of the relevant ones, are identified and discussed. Some phenomena that are covered in scholarly literature can also be identified as relevant themes in fictional literature and expressed in classmarks. This can allow for hybrid search and retrieval systems covering both fiction and nonfiction, resulting in better leveraging of the knowledge contained in fictional works.
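    The citation-order problem described in the abstract above can be sketched minimally: once the relevant themes of a work have been selected, their facet notations are concatenated in a fixed citation order to form a classmark. The facet names and notations below are hypothetical placeholders, not actual ILC2 notation.

    ```python
    # Hypothetical citation order; a real faceted scheme defines this normatively.
    CITATION_ORDER = ["phenomenon", "process", "place", "time"]

    def build_classmark(themes):
        """Concatenate facet notations following the citation order.
        `themes` maps facet name -> notation for the themes judged relevant;
        non-relevant facets are simply absent and contribute nothing."""
        return "".join(themes[f] for f in CITATION_ORDER if f in themes)

    # Toy example: a novel whose relevant themes are a phenomenon and a place.
    print(build_classmark({"phenomenon": "mq", "place": "yna"}))  # → mqyna
    ```

    The point of a fixed citation order is that the same set of relevant themes always yields the same classmark, regardless of the order in which the indexer identified them.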
  20. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.00
    
    Date
    22.5.2021 12:43:05

Languages

  • d 39
  • e 18
  • sp 1