Search (112 results, page 1 of 6)

  • Filter: year_i:[2020 TO 2030}
  1. Petrovich, E.: Science mapping and science maps (2021) 0.06
    0.06124655 = product of:
      0.1224931 = sum of:
        0.1224931 = product of:
          0.2449862 = sum of:
            0.2449862 = weight(_text_:maps in 595) [ClassicSimilarity], result of:
              0.2449862 = score(doc=595,freq=24.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.8602677 = fieldWeight in 595, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.03125 = fieldNorm(doc=595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
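
    The indented tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: the score of each hit is the product of the per-term weight (query weight times field weight) and the coord and norm factors shown. A minimal Python sketch, reproducing only the numbers reported above for document 595 and the term "maps" (the tf and idf lines follow the ClassicSimilarity definitions):

      import math

      freq = 24.0               # termFreq of "maps" in the field
      doc_freq = 435            # documents containing the term
      max_docs = 44218          # documents in the index
      query_norm = 0.050679237  # query normalisation factor reported above
      field_norm = 0.03125      # stored length norm for this field
      coord = 0.5 * 0.5         # the two coord(1/2) factors in the tree

      tf = math.sqrt(freq)                             # 4.8989797
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.619245
      query_weight = idf * query_norm                  # 0.28477904
      field_weight = tf * idf * field_norm             # 0.8602677
      print(coord * query_weight * field_weight)       # ~0.06124655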
    
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
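
    Both map families described in the abstract reduce to weighted networks: publications linked by the citations between them, or terms linked by co-occurrence in the same publications. As a minimal, hypothetical sketch of the term-based case (the toy terms below are illustrative assumptions, not data from the entry), the weighted edge list of such a network can be extracted like this:

      from itertools import combinations
      from collections import Counter

      # Toy corpus: each set stands for the indexing terms of one publication.
      publications = [
          {"science mapping", "citation analysis", "visualization"},
          {"science mapping", "co-word analysis"},
          {"citation analysis", "visualization"},
      ]

      # Count how often each pair of terms co-occurs within a publication;
      # the weighted pairs are the edges of a term-based science map.
      edges = Counter()
      for terms in publications:
          for a, b in combinations(sorted(terms), 2):
              edges[(a, b)] += 1

      for (a, b), weight in edges.most_common():
          print(f"{a} -- {b}  weight={weight}")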
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.04
    0.040246032 = product of:
      0.080492064 = sum of:
        0.080492064 = product of:
          0.24147618 = sum of:
            0.24147618 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24147618 = score(doc=862,freq=2.0), product of:
                0.42965913 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050679237 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Busch, A.: Terminologiemanagement : erfolgreicher Wissenstransfer durch Concept-Maps und die Überlegungen in DGI-AKTS (2021) 0.04
    0.035360713 = product of:
      0.070721425 = sum of:
        0.070721425 = product of:
          0.14144285 = sum of:
            0.14144285 = weight(_text_:maps in 422) [ClassicSimilarity], result of:
              0.14144285 = score(doc=422,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.4966758 = fieldWeight in 422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0625 = fieldNorm(doc=422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Moore, S.M.; Kiser, T.; Hodge, C.: Classification of print-based cartographic materials : a survey and analysis (2022) 0.04
    0.035360713 = product of:
      0.070721425 = sum of:
        0.070721425 = product of:
          0.14144285 = sum of:
            0.14144285 = weight(_text_:maps in 1109) [ClassicSimilarity], result of:
              0.14144285 = score(doc=1109,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.4966758 = fieldWeight in 1109, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper examines the predominant systems used for the classification of print-based cartographic materials (primarily atlases and sheet maps). We present the results of a brief, widely distributed survey on the topic, followed by discussions of the distinctive characteristics of the classification systems used by survey respondents. The Library of Congress Classification and Dewey Decimal Classification systems were found to be widely used, with several other schemes also in use.
  5. Manzoni, L.: Nuovo Soggettario and semantic indexing of cartographic resources in Italy : an exploratory study (2022) 0.04
    0.035360713 = product of:
      0.070721425 = sum of:
        0.070721425 = product of:
          0.14144285 = sum of:
            0.14144285 = weight(_text_:maps in 1138) [ClassicSimilarity], result of:
              0.14144285 = score(doc=1138,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.4966758 = fieldWeight in 1138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1138)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper focuses on the potential use of Nuovo soggettario, the semantic indexing tool adopted by the National Central Library of Florence (Biblioteca nazionale centrale di Firenze), for indexing cartographic resources. Particular attention is paid to the treatment of place names, the use of formal subjects, and the different ways of constructing subject strings for general and thematic maps.
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03353836 = product of:
      0.06707672 = sum of:
        0.06707672 = product of:
          0.20123015 = sum of:
            0.20123015 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20123015 = score(doc=5669,freq=2.0), product of:
                0.42965913 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050679237 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.03353836 = product of:
      0.06707672 = sum of:
        0.06707672 = product of:
          0.20123015 = sum of:
            0.20123015 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20123015 = score(doc=1000,freq=2.0), product of:
                0.42965913 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050679237 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.03
    0.030940626 = product of:
      0.06188125 = sum of:
        0.06188125 = product of:
          0.1237625 = sum of:
            0.1237625 = weight(_text_:maps in 5719) [ClassicSimilarity], result of:
              0.1237625 = score(doc=5719,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.43459132 = fieldWeight in 5719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5719)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus, and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of the citation networks embedded in ordinary complex Boolean searches.
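
    A minimal sketch of the export-then-map step described in the abstract, assuming a CSV export of publication records with a semicolon-separated author column (the file name and column label are assumptions, not the actual Dimensions schema):

      import csv
      from collections import Counter

      # Count co-authorship links in a hypothetical export; VOSviewer renders
      # the same kind of weighted pair list as a network map.
      links = Counter()
      with open("dimensions_export.csv", newline="", encoding="utf-8") as fh:
          for row in csv.DictReader(fh):
              authors = [a.strip() for a in row["Authors"].split(";") if a.strip()]
              for i, first in enumerate(authors):
                  for second in authors[i + 1:]:
                      links[tuple(sorted((first, second)))] += 1

      for (first, second), weight in links.most_common(10):
          print(f"{first} -- {second}  weight={weight}")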
  9. Hausser, R.: Language and nonlanguage cognition (2021) 0.03
    0.026520537 = product of:
      0.053041074 = sum of:
        0.053041074 = product of:
          0.10608215 = sum of:
            0.10608215 = weight(_text_:maps in 255) [ClassicSimilarity], result of:
              0.10608215 = score(doc=255,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.37250686 = fieldWeight in 255, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.046875 = fieldNorm(doc=255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
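
    The recognition step described above (a concept type applied to raw data yields a concept token) can be pictured with a small, hypothetical sketch; the class names and the matching rule are illustrative assumptions, not part of DBS itself:

      from dataclasses import dataclass
      from typing import Callable, Optional

      @dataclass(frozen=True)
      class ConceptType:
          name: str
          matches: Callable[[dict], bool]   # necessary condition on raw data

      @dataclass(frozen=True)
      class ConceptToken:
          type_name: str
          raw: dict

      def recognize(concept: ConceptType, raw_data: dict) -> Optional[ConceptToken]:
          """Apply a concept type to raw data; a successful match yields a token."""
          if concept.matches(raw_data):
              return ConceptToken(concept.name, raw_data)
          return None

      square = ConceptType("square", lambda d: d.get("sides") == 4
                           and len(set(d.get("lengths", ()))) == 1)
      print(recognize(square, {"sides": 4, "lengths": (2, 2, 2, 2)}))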
  10. Der Student aus dem Computer (2023) 0.02
    0.02403218 = product of:
      0.04806436 = sum of:
        0.04806436 = product of:
          0.09612872 = sum of:
            0.09612872 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09612872 = score(doc=1079,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  11. Zhang, P.; Soergel, D.: Cognitive mechanisms in sensemaking : a qualitative user study (2020) 0.02
    0.022100445 = product of:
      0.04420089 = sum of:
        0.04420089 = product of:
          0.08840178 = sum of:
            0.08840178 = weight(_text_:maps in 5614) [ClassicSimilarity], result of:
              0.08840178 = score(doc=5614,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.31042236 = fieldWeight in 5614, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5614)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Throughout an information search, a user needs to make sense of the information found to create an understanding. This requires cognitive effort that can be demanding. Building on prior sensemaking models and expanding them with ideas from learning and cognitive psychology, we examined the use of cognitive mechanisms during individual sensemaking. We conducted a qualitative user study of 15 students who searched for and made sense of information for business analysis and news writing tasks. Through the analysis of think-aloud protocols, recordings of screen movements, intermediate work products of sensemaking, including notes and concept maps, and final reports, we observed the use of 17 data-driven and structure-driven mechanisms for processing new information, examining individual concepts and relationships, and detecting anomalies. These cognitive mechanisms, as the basic operators that move sensemaking forward, provide in-depth understanding of how people process information to produce sense. Meaningful learning and sensemaking are closely related, so our findings apply to learning as well. Our results contribute to a better understanding of the sensemaking process-how people think-and this better understanding can inform the teaching of thinking skills and the design of improved sensemaking assistants and mind tools.
  12. Zimmerman, M.S.: Mapping literacies : comparing information horizons mapping to measures of information and health literacy (2020) 0.02
    0.022100445 = product of:
      0.04420089 = sum of:
        0.04420089 = product of:
          0.08840178 = sum of:
            0.08840178 = weight(_text_:maps in 5711) [ClassicSimilarity], result of:
              0.08840178 = score(doc=5711,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.31042236 = fieldWeight in 5711, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5711)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Information literacy and health literacy skills are positively correlated with indicators of quality of life. Assessing these literacies, however, can be daunting - particularly with people who may not respond well to prose-based tools. The purpose of this paper is to use information horizons methodology as a metric that may be reflective of literacies. Design/methodology/approach: Following a power analysis to ensure statistical significance, a sample of 161 participants was recruited from a university population, given formal, vetted measures of information literacy and health literacy, and then asked to create an information horizons map within a health-related context. The information horizons maps were evaluated in two different ways. First, the number of sources was counted. Then, the quality of sources was factored in. Multiple regression analysis was applied to both metrics as independent variables with the other assessments as dependent variables. Anker, Reinhart, and Feeley's model provided the conceptual framework for the study. Findings: Information horizons mapping was not found to have a significant relationship with measures of information literacy. However, there were strong, statistically significant relationships with the measures of health literacy employed in this study. Originality/value: Employing information horizons methodology as a means of providing a metric to assess literacies may be helpful in providing a more complete picture of a person's abilities. While the current assessment tools have value, this method has the potential to provide important information about the health literacy of people who are not traditionally well represented by prose-based measures.
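
    A minimal sketch of the analysis described above, pairing each participant's two map metrics (source count and quality-weighted count) with a literacy score and fitting an ordinary-least-squares multiple regression; the numbers are placeholders, not data from the study:

      import numpy as np

      # Placeholder rows: [number of sources, quality-weighted source score].
      X = np.array([[5, 3.2], [9, 6.1], [4, 2.0], [7, 5.5], [6, 4.4]])
      y = np.array([48.0, 71.0, 40.0, 66.0, 58.0])   # e.g. a health-literacy score

      # Add an intercept column and solve the least-squares problem.
      X1 = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
      print("intercept, b_sources, b_quality:", coef)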
  13. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.020599011 = product of:
      0.041198023 = sum of:
        0.041198023 = product of:
          0.082396045 = sum of:
            0.082396045 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.082396045 = score(doc=4156,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  14. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    0.020599011 = product of:
      0.041198023 = sum of:
        0.041198023 = product of:
          0.082396045 = sum of:
            0.082396045 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.082396045 = score(doc=1203,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  15. Aral, S.: The hype machine : how social media disrupts our elections, our economy, and our health - and how we must adapt (2020) 0.02
    0.017680356 = product of:
      0.035360713 = sum of:
        0.035360713 = product of:
          0.070721425 = sum of:
            0.070721425 = weight(_text_:maps in 550) [ClassicSimilarity], result of:
              0.070721425 = score(doc=550,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2483379 = fieldWeight in 550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.03125 = fieldNorm(doc=550)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Social media connected the world--and gave rise to fake news and increasing polarization. Now a leading researcher at MIT draws on 20 years of research to show how these trends threaten our political, economic, and emotional health in this eye-opening exploration of the dark side of technological progress. Today we have the ability, unprecedented in human history, to amplify our interactions with each other through social media. It is paramount, MIT social media expert Sinan Aral says, that we recognize the outsized impact social media has on our culture, our democracy, and our lives in order to steer today's social technology toward good, while avoiding the ways it can pull us apart. Otherwise, we could fall victim to what Aral calls "The Hype Machine." As a senior researcher of the longest-running study of fake news ever conducted, Aral found that lies spread online farther and faster than the truth--a harrowing conclusion that was featured on the cover of Science magazine. Among the questions Aral explores following twenty years of field research: Did Russian interference change the 2016 election? And how is it affecting the vote in 2020? Why does fake news travel faster than the truth online? How do social ratings and automated sharing determine which products succeed and fail? How does social media affect our kids? First, Aral links alarming data and statistics to three accelerating social media shifts: hyper-socialization, personalized mass persuasion, and the tyranny of trends. Next, he grapples with the consequences of the Hype Machine for elections, businesses, dating, and health. Finally, he maps out strategies for navigating the Hype Machine, offering his singular guidance for managing social media to fulfill its promise going forward. Rarely has a book so directly wrestled with the secret forces that drive the news cycle every day.
  16. Koch, C.: Was ist Bewusstsein? (2020) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 5723) [ClassicSimilarity], result of:
              0.06866337 = score(doc=5723,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 5723, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5723)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 1.2020 22:15:11
  17. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.06866337 = score(doc=5846,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:40
  18. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 5906) [ClassicSimilarity], result of:
              0.06866337 = score(doc=5906,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 5906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:28
  19. Arndt, O.: Totale Telematik (2020) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 5907) [ClassicSimilarity], result of:
              0.06866337 = score(doc=5907,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 5907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5907)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2020 19:11:24
  20. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 82) [ClassicSimilarity], result of:
              0.06866337 = score(doc=82,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2020 19:16:24

Languages

  • e (English) 82
  • d (German) 30

Types

  • a 104
  • el 21
  • m 3
  • p 3
  • x 1