Search (114 results, page 1 of 6)

  • year_i:[2020 TO 2030}
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.06
    0.062729806 = product of:
      0.12545961 = sum of:
        0.12545961 = sum of:
          0.09117339 = weight(_text_:abstracts in 950) [ClassicSimilarity], result of:
            0.09117339 = score(doc=950,freq=2.0), product of:
              0.2890173 = queryWeight, product of:
                5.7104354 = idf(docFreq=397, maxDocs=44218)
                0.05061213 = queryNorm
              0.31545997 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7104354 = idf(docFreq=397, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.034286223 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.034286223 = score(doc=950,freq=2.0), product of:
              0.17723505 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05061213 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: With the shift to an information-based society and the de-centralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload". To do so, a concept analysis using Rodgers' approach was performed.
    Design/methodology/approach: A concept analysis using Rodgers' approach was conducted on a corpus of documents published between 2010 and September 2020. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA).
    Findings: The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. Triggers were related to information characteristics, information need, the working environment, the cognitive abilities of individuals, and the information environment. Information overload manifests itself both emotionally and cognitively, and its consequences are both internal and external. These findings allowed the authors to propose a definition of information overload.
    Originality/value: Through the concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
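
    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a cross-check, a minimal sketch in Python that recomputes this record's score from the constants shown in the tree; the formulas assume standard ClassicSimilarity behaviour, and all numbers are copied from the listing.

      import math

      MAX_DOCS = 44218          # maxDocs from the explain tree
      QUERY_NORM = 0.05061213   # queryNorm, shared by both query terms
      FIELD_NORM = 0.0390625    # fieldNorm(doc=950)

      def idf(doc_freq):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(freq, doc_freq):
          # contribution of one term = queryWeight * fieldWeight
          tf = math.sqrt(freq)                            # tf(freq=2.0) = 1.4142135
          query_weight = idf(doc_freq) * QUERY_NORM       # e.g. 0.2890173 for "abstracts"
          field_weight = tf * idf(doc_freq) * FIELD_NORM  # e.g. 0.31545997 for "abstracts"
          return query_weight * field_weight

      # doc 950 matches "abstracts" (docFreq=397) and "22" (docFreq=3622), freq 2.0 each
      raw = term_score(2.0, 397) + term_score(2.0, 3622)  # ~0.12545961, the inner sum
      print(raw * 0.5)  # coord(1/2) -> ~0.06273, matching the listed 0.062729806
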
  2. Li, K.; Jiao, C.: ¬The data paper as a sociolinguistic epistemic object : a content analysis on the rhetorical moves used in data paper abstracts (2022) 0.05
    0.045586694 = product of:
      0.09117339 = sum of:
        0.09117339 = product of:
          0.18234678 = sum of:
            0.18234678 = weight(_text_:abstracts in 560) [ClassicSimilarity], result of:
              0.18234678 = score(doc=560,freq=8.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.63091993 = fieldWeight in 560, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The data paper is an emerging academic genre that focuses on the description of research data objects. However, there is a lack of empirical knowledge about this rising genre in quantitative science studies, particularly from the perspective of its linguistic features. To fill this gap, this research aims to offer a first quantitative examination of which rhetorical moves (rhetorical units performing a coherent narrative function) are used in data paper abstracts, as well as how these moves are used. To this end, we developed a new classification scheme for rhetorical moves in data paper abstracts by expanding a well-received system that focuses on English-language research article abstracts. We used this expanded scheme to classify and analyze rhetorical moves used in two flagship data journals, Scientific Data and Data in Brief. We found that data papers exhibit a combination of introduction-, method-, results-, and discussion-oriented moves and data-oriented moves, and that the usage differences between the journals can be largely explained by journal policies concerning abstract and paper structure. This research offers a novel examination of how the data paper, a data-oriented knowledge representation, is composed, which greatly contributes to a deeper understanding of research data and its publication in the scholarly communication system.
  3. Bu, Y.; Li, M.; Gu, W.; Huang, W.-b.: Topic diversity : a discipline scheme-free diversity measurement for journals (2021) 0.05
    0.045128524 = product of:
      0.09025705 = sum of:
        0.09025705 = product of:
          0.1805141 = sum of:
            0.1805141 = weight(_text_:abstracts in 209) [ClassicSimilarity], result of:
              0.1805141 = score(doc=209,freq=4.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.6245789 = fieldWeight in 209, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=209)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Scientometrics has many citation-based measurements for characterizing diversity, but most of these measurements depend on human-designed categories, and the granularity of discipline classifications sometimes does not allow in-depth analysis. As such, the current paper proposes a new measurement for quantifying journals' diversity by utilizing the abstracts of scientific publications in journals, namely topic diversity (TD). Specifically, we apply a topic detection method to extract fine-grained topics, rather than disciplines, in journals and adapt certain diversity indicators to calculate TD. Since TD only requires the abstracts of publications as input, rather than citation relationships between publications, this measurement has the potential to be widely used in scientometrics.
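
    The abstract names the inputs of the measurement but not a formula. A minimal sketch of one way such an indicator can be computed, assuming per-document topic distributions (e.g. from an LDA-style topic model over the abstracts) are already available and using normalised Shannon entropy as the diversity indicator; this is an illustration, not the paper's actual definition of TD.

      import numpy as np

      def topic_diversity(doc_topic_dists):
          """Normalised Shannon entropy of a journal's aggregate topic distribution.

          doc_topic_dists: array of shape (n_docs, n_topics); each row is one
          publication's topic distribution inferred from its abstract.
          """
          journal_dist = np.asarray(doc_topic_dists).mean(axis=0)
          journal_dist = journal_dist / journal_dist.sum()
          nonzero = journal_dist[journal_dist > 0]
          entropy = -(nonzero * np.log(nonzero)).sum()
          return entropy / np.log(len(journal_dist))  # 0 = single topic, 1 = uniform spread

      # toy example: three abstracts over four topics
      print(topic_diversity([[0.7, 0.1, 0.1, 0.1],
                             [0.2, 0.6, 0.1, 0.1],
                             [0.1, 0.1, 0.7, 0.1]]))
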
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.04019274 = product of:
      0.08038548 = sum of:
        0.08038548 = product of:
          0.24115643 = sum of:
            0.24115643 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24115643 = score(doc=862,freq=2.0), product of:
                0.4290902 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05061213 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Hahn, U.: Abstracting - Textzusammenfassung (2023) 0.04
    0.03868159 = product of:
      0.07736318 = sum of:
        0.07736318 = product of:
          0.15472636 = sum of:
            0.15472636 = weight(_text_:abstracts in 786) [ClassicSimilarity], result of:
              0.15472636 = score(doc=786,freq=4.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.5353533 = fieldWeight in 786, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=786)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Abstracts (understood here as an umbrella term for any form of, usually written, summary) are based on condensing the content of longer source texts (such as newspaper or journal articles). Starting from the set of all statements in an original document (also called the full text), an abstract should, depending on the desired degree of condensation, contain only the most important statements of the full text, or generalisations of them, in a redundancy-free, grammatically correct, textually coherent and easily readable form. The central assumption in information science is that this condensation gives the user an overview of the topics covered in one or more documents with considerably less time and effort than reading the original text. It is undisputed that every form of condensation entails a loss of information; the aim is therefore to strike an optimal balance between time saved and information lost. Abstracting, the process of producing abstracts, can itself be regarded as a specialisation of the more general task of text summarisation (Textzusammenfassung, TZF), but focuses primarily on specialist and factual texts.
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03349395 = product of:
      0.0669879 = sum of:
        0.0669879 = product of:
          0.2009637 = sum of:
            0.2009637 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.2009637 = score(doc=5669,freq=2.0), product of:
                0.4290902 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05061213 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.03349395 = product of:
      0.0669879 = sum of:
        0.0669879 = product of:
          0.2009637 = sum of:
            0.2009637 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.2009637 = score(doc=1000,freq=2.0), product of:
                0.4290902 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05061213 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.03
    0.031910684 = product of:
      0.06382137 = sum of:
        0.06382137 = product of:
          0.12764274 = sum of:
            0.12764274 = weight(_text_:abstracts in 668) [ClassicSimilarity], result of:
              0.12764274 = score(doc=668,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.44164395 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
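
    As an illustration of the kind of extractor being compared (not one of the four systems evaluated in the paper), a minimal sketch that pulls candidate terms from a sentence using spaCy noun chunks; the model name en_core_web_sm is an assumption.

      import spacy

      # assumes the small English model has been installed:
      #   python -m spacy download en_core_web_sm
      nlp = spacy.load("en_core_web_sm")

      sentence = ("A natural transformation between two functors is a family of "
                  "morphisms indexed by the objects of the source category.")

      doc = nlp(sentence)
      candidates = {chunk.text.lower() for chunk in doc.noun_chunks}
      print(sorted(candidates))
      # e.g. ['a family', 'a natural transformation', 'the source category', 'two functors', ...]
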
  9. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.03
    0.027352015 = product of:
      0.05470403 = sum of:
        0.05470403 = product of:
          0.10940806 = sum of:
            0.10940806 = weight(_text_:abstracts in 720) [ClassicSimilarity], result of:
              0.10940806 = score(doc=720,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.37855196 = fieldWeight in 720, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This project brought together undergraduate students in Computer Science with librarians to mine abstracts of articles from the Texas A&M University Libraries' institutional repository, OAKTrust, in order to probe the creation of new metadata to improve discovery and use. The mining operation task consisted simply of classifying the articles into two categories of research type: basic research ("for understanding," "curiosity-based," or "knowledge-based") and applied research ("use-based"). These categories are fundamental especially for funders but are also important to researchers. The mining-to-classification steps took several iterations, but ultimately, we achieved good results with the toolkit BERT (Bidirectional Encoder Representations from Transformers). The project and its workflows represent a preview of what may lie ahead in the future of crafting metadata using text mining techniques to enhance discoverability.
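
    A minimal sketch of the inference step described above, assuming a BERT checkpoint has already been fine-tuned on the two labels; the model id used here is hypothetical, not the project's actual model.

      from transformers import pipeline

      # hypothetical fine-tuned checkpoint with labels "basic" and "applied"
      classifier = pipeline("text-classification",
                            model="oaktrust/research-type-bert")

      abstract = ("We investigate the fundamental mechanisms governing protein "
                  "folding without reference to a particular application.")

      print(classifier(abstract, truncation=True))
      # e.g. [{'label': 'basic', 'score': 0.93}]
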
  10. ¬Der Student aus dem Computer (2023) 0.02
    0.024000356 = product of:
      0.048000712 = sum of:
        0.048000712 = product of:
          0.096001424 = sum of:
            0.096001424 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.096001424 = score(doc=1079,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  11. Thelwall, M.; Sud, P.: Do new research issues attract more citations? : a comparison between 25 Scopus subject categories (2021) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 157) [ClassicSimilarity], result of:
              0.09117339 = score(doc=157,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=157)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Finding new ways to help researchers and administrators understand academic fields is an important task for information scientists. Given the importance of interdisciplinary research, it is essential to be aware of disciplinary differences in aspects of scholarship, such as the significance of recent changes in a field. This paper identifies potential changes in 25 subject categories through a term comparison of words in article titles, keywords and abstracts in 1 year compared to the previous 4 years. The scholarly influence of new research issues is indirectly assessed with a citation analysis of articles matching each trending term. While topic-related words dominate the top terms, style, national focus, and language changes are also evident. Thus, as reflected in Scopus, fields evolve along multiple dimensions. Moreover, while articles exploiting new issues are usually more cited in some fields, such as Organic Chemistry, they are usually less cited in others, including History. The possible causes of new issues being less cited include externally driven temporary factors, such as disease outbreaks, and internally driven temporary decisions, such as a deliberate emphasis on a single topic (e.g., through a journal special issue).
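
    A minimal sketch of the kind of term comparison described above (not the paper's exact procedure): terms whose relative document frequency in the latest year clearly exceeds their frequency in the previous four years are flagged as trending.

      from collections import Counter

      def trending_terms(current_docs, previous_docs, min_ratio=2.0, min_docs=5):
          """Terms over-represented in current-year titles, keywords and abstracts.

          Each document is a list of tokens built from its title, keywords and abstract.
          """
          cur = Counter(t for doc in current_docs for t in set(doc))
          prev = Counter(t for doc in previous_docs for t in set(doc))
          n_cur, n_prev = len(current_docs), len(previous_docs)
          out = []
          for term, c in cur.items():
              cur_rate = c / n_cur
              prev_rate = (prev[term] + 1) / (n_prev + 1)   # add-one smoothing
              if c >= min_docs and cur_rate / prev_rate >= min_ratio:
                  out.append((term, cur_rate / prev_rate))
          return sorted(out, key=lambda x: -x[1])

      # usage: print(trending_terms(docs_latest_year, docs_previous_four_years))
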
  12. Abdo, A.H.; Cointet, J.-P.; Bourret, P.; Cambrosio, A.: Domain-topic models with chained dimensions : charting an emergent domain of a major oncology conference (2022) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 619) [ClassicSimilarity], result of:
              0.09117339 = score(doc=619,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 619, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=619)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a contribution to the study of bibliographic corpora through science mapping. From a graph representation of documents and their textual dimension, stochastic block models can provide a simultaneous clustering of documents and words that we call a domain-topic model. Previous work investigated the resulting topics, or word clusters, while ours focuses on the study of the document clusters we call domains. To enable the description and interactive navigation of domains, we introduce measures and interfaces that consider the structure of the model to relate both types of clusters. We then present a procedure that extends the block model to cluster metadata attributes of documents, which we call a domain-chained model, noting that our measures and interfaces transpose to metadata clusters. We provide an example application to a corpus relevant to current science, technology and society (STS) research and an interesting case for our approach: the abstracts presented between 1995 and 2017 at the American Society of Clinical Oncology Annual Meeting, the major oncology research conference. Through a sequence of domain-topic and domain-chained models, we identify and describe a group of domains that have notably grown through the last decades and which we relate to the establishment of "oncopolicy" as a major concern in oncology.
  13. Pech, G.; Delgado, C.; Sorella, S.P.: Classifying papers into subfields using Abstracts, Titles, Keywords and KeyWords Plus through pattern detection and optimization procedures : an application in Physics (2022) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 744) [ClassicSimilarity], result of:
              0.09117339 = score(doc=744,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Bragato Barros, T.H.: Michel Pêcheux's discourse analysis : an approach to domain analyses (2023) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 1116) [ClassicSimilarity], result of:
              0.09117339 = score(doc=1116,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 1116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1116)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article discusses the aspects and points of contact between discourse analysis and knowledge organization, perceiving how Michel Pêcheux's discourse analyses can contribute to domain analyses. Discourse analysis (DA) deals with the theoretical-methodological development of social and scientific movements that took place in France from the 1960s onwards; this paper seeks to discuss aspects of discourse analysis and the possibilities of its use in the universe of knowledge organization (KO). Little work is done structurally and transversally when it comes to discourse itself, especially when the words "discourse" and "analysis" appear in the titles, abstracts, keywords etc. of chapters, books and journals that have KO in their scope. That is mainly because those works are recent and belong to fields far removed from those which have traditionally dealt with discourse. Therefore, viewing discourse as a theoretical contribution to KO means a new framework should be understood in the scope of the analyses carried out regarding the construction of systems, approaches, and studies, precisely because it sees in the terms not only what concerns their concepts, as is the traditional route in KO, but also the ideology, and understands the construction of meaning as something historical as well as social. So, there is a major contribution to domain analyses based on Pêcheux's discourse theory.
  15. Zakaria, M.S.: Measuring typographical errors in online catalogs of academic libraries using Ballard's list : a case study from Egypt (2023) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 1184) [ClassicSimilarity], result of:
              0.09117339 = score(doc=1184,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Typographical errors in bibliographic records of online library catalogs are a common troublesome phenomenon, spread all over the world. They can affect the retrieval and identification of items in information retrieval systems and thus prevent users from finding the documents they need. The present study was conducted to measure typographical errors in the online catalog of the Egyptian Universities Libraries Consortium (EULC). The investigation depended on Terry Ballard's typographical error terms list. The EULC catalog was searched to identify matched erroneous records. The study found that the total number of erroneous records reached 1686, whereas the mean error rate for each record is 11.24, which is very high. About 396 erroneous records (23.49%) have been retrieved from Section C of Ballard's list (Moderate Probability). The typographical errors found within the abstracts of the study's sample records represented 35.82%. Omissions were the first common type of errors with 54.51%, followed by transpositions at 17.08%. Regarding the analysis of parts of speech, the study found that 63.46% of errors occur in noun terms. The results of the study indicated that typographical errors still pose a serious challenge for information retrieval systems, especially for library systems in the Arab environment. The study proposes some solutions for Egyptian university libraries in order to avoid typographic mistakes in the future.
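
    A minimal sketch of the matching step such a study implies: scanning catalog records for terms from a known-misspellings list. The three entries shown here are hypothetical stand-ins for Ballard's actual list.

      # hypothetical excerpt standing in for Ballard's typographical error list
      ERROR_TERMS = {"libary", "enviroment", "goverment"}

      def erroneous_records(records):
          """Yield (record_id, matched_error_terms) for records containing known typos."""
          for rec in records:
              tokens = set((rec.get("title", "") + " " + rec.get("abstract", "")).lower().split())
              hits = ERROR_TERMS & tokens
              if hits:
                  yield rec["id"], hits

      sample = [{"id": 1, "title": "The public libary in Egypt", "abstract": ""}]
      print(list(erroneous_records(sample)))   # [(1, {'libary'})]
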
  16. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.020571733 = product of:
      0.041143466 = sum of:
        0.041143466 = product of:
          0.08228693 = sum of:
            0.08228693 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.08228693 = score(doc=4156,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  17. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    0.020571733 = product of:
      0.041143466 = sum of:
        0.041143466 = product of:
          0.08228693 = sum of:
            0.08228693 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.08228693 = score(doc=1203,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  18. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.02
    0.018234678 = product of:
      0.036469355 = sum of:
        0.036469355 = product of:
          0.07293871 = sum of:
            0.07293871 = weight(_text_:abstracts in 851) [ClassicSimilarity], result of:
              0.07293871 = score(doc=851,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.25236797 = fieldWeight in 851, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.03125 = fieldNorm(doc=851)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by the ChatGPT.
    1. Introduction OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
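
    A minimal sketch of the paraphrasing step described above, using the current OpenAI Python client; the model name and prompt wording are assumptions, not those used in the study.

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def paraphrase(abstract, model="gpt-4o-mini"):   # model name is an assumption
          response = client.chat.completions.create(
              model=model,
              messages=[{"role": "user",
                         "content": "Paraphrase this abstract for a literature "
                                    "review on digital twins in healthcare:\n" + abstract}],
          )
          return response.choices[0].message.content

      # usage: print(paraphrase(open("abstract.txt").read()))
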
  19. Koch, C.: Was ist Bewusstsein? (2020) 0.02
    0.017143112 = product of:
      0.034286223 = sum of:
        0.034286223 = product of:
          0.06857245 = sum of:
            0.06857245 = weight(_text_:22 in 5723) [ClassicSimilarity], result of:
              0.06857245 = score(doc=5723,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.38690117 = fieldWeight in 5723, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5723)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 1.2020 22:15:11
  20. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    0.017143112 = product of:
      0.034286223 = sum of:
        0.034286223 = product of:
          0.06857245 = sum of:
            0.06857245 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.06857245 = score(doc=5846,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:40

Languages

  • e 84
  • d 30

Types

  • a 107
  • el 21
  • p 3
  • m 2
  • x 1