Search (23 results, page 1 of 2)

  • Filter: type_ss:"el"
  • Filter: year_i:[2020 TO 2030}
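
These facets correspond to Solr filter queries (fq): an exact match on the string field type_ss and a range on the integer field year_i, where the mixed brackets in [2020 TO 2030} make 2020 inclusive and 2030 exclusive. A minimal sketch of how such a filtered request could be sent is shown below; the endpoint URL, core name, and the free-text query term are placeholders, since the installation details and the original query string are not part of this listing.

```python
# Sketch of the active facets as Solr filter queries (fq). The endpoint URL,
# core name, and the free-text query term are placeholders, not details of
# the installation behind this result list.
import requests

params = {
    "q": "indexing",                   # placeholder; the original query string is not shown
    "fq": [
        'type_ss:"el"',                # string-field filter copied from the facet above
        "year_i:[2020 TO 2030}",       # year range: 2020 inclusive, 2030 exclusive
    ],
    "wt": "json",
    "debugQuery": "true",              # returns per-document score explanations like those below
    "rows": 20,
}

resp = requests.get("http://localhost:8983/solr/mycore/select", params=params)
print(resp.json()["response"]["numFound"], "documents found")
```
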
  1. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.05
    0.053932853 = product of:
      0.107865706 = sum of:
        0.107865706 = sum of:
          0.05872144 = weight(_text_:indexing in 40) [ClassicSimilarity], result of:
            0.05872144 = score(doc=40,freq=2.0), product of:
              0.19835205 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.051817898 = queryNorm
              0.29604656 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.049144268 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.049144268 = score(doc=40,freq=2.0), product of:
              0.18145745 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051817898 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
    
    Date
    17.11.2020 12:22:59
    Theme
    Citation indexing
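
The score breakdowns in this list follow Lucene's ClassicSimilarity: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and the sum over matching terms is scaled by the coordination factor (coord(1/2) here, since one of two query clauses matched). A minimal sketch, using the figures from record 1 above, reproduces the arithmetic; the helper function is a plain restatement of the classic formula, not Lucene code.

```python
# Minimal sketch reproducing the ClassicSimilarity (TF-IDF) arithmetic from
# record 1 above. The constants are copied from the explanation; the helper
# is a plain restatement of Lucene's classic formula, not library code.
from math import sqrt

def classic_term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm               # queryWeight = idf * queryNorm
    field_weight = sqrt(freq) * idf * field_norm  # fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq)
    return query_weight * field_weight

QUERY_NORM = 0.051817898
FIELD_NORM = 0.0546875                            # fieldNorm(doc=40)

w_indexing = classic_term_score(freq=2.0, idf=3.8278677,
                                query_norm=QUERY_NORM, field_norm=FIELD_NORM)
w_22 = classic_term_score(freq=2.0, idf=3.5018296,
                          query_norm=QUERY_NORM, field_norm=FIELD_NORM)

coord = 0.5                                       # coord(1/2): 1 of 2 query clauses matched
total = (w_indexing + w_22) * coord
print(w_indexing, w_22, total)                    # ≈ 0.0587, 0.0491, 0.0539 as in the listing
```
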
  2. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.0342919 = product of:
      0.0685838 = sum of:
        0.0685838 = product of:
          0.2057514 = sum of:
            0.2057514 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.2057514 = score(doc=5669,freq=2.0), product of:
                0.43931273 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051817898 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  3. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.021061828 = product of:
      0.042123657 = sum of:
        0.042123657 = product of:
          0.08424731 = sum of:
            0.08424731 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.08424731 = score(doc=4156,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  4. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.02
    0.018162236 = product of:
      0.03632447 = sum of:
        0.03632447 = product of:
          0.07264894 = sum of:
            0.07264894 = weight(_text_:indexing in 658) [ClassicSimilarity], result of:
              0.07264894 = score(doc=658,freq=6.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.3662626 = fieldWeight in 658, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments have been performed using the open source Annif toolkit for automated subject indexing and classification, but should generalize also to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform baseline methods in text classification particularly for Finnish and Swedish text, but not English, where baseline methods are most effective. The differences between lemmatization methods are quite small. The systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
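
The comparison described in this abstract concerns the token-normalization step of a traditional text-classification pipeline: before features are extracted, each token is reduced either by a rule-based stemmer or by a dictionary-backed lemmatizer. The sketch below illustrates that contrast, using NLTK's Snowball stemmer and WordNet lemmatizer as stand-ins; it is not the Annif code or the exact analyzers evaluated in the paper.

```python
# Illustration of the normalization step the abstract compares: the same
# tokens reduced by a rule-based stemmer vs. a dictionary-based lemmatizer.
# NLTK is used as a stand-in; the paper evaluates other analyzers as well.
from nltk.stem.snowball import SnowballStemmer
from nltk.stem import WordNetLemmatizer   # needs nltk.download("wordnet") once

tokens = ["indexes", "libraries", "classifications", "studies"]

stemmer = SnowballStemmer("english")      # rule-based suffix stripping
lemmatizer = WordNetLemmatizer()          # dictionary lookup, returns real word forms

for tok in tokens:
    print(f"{tok:16} stem={stemmer.stem(tok):12} lemma={lemmatizer.lemmatize(tok)}")

# SnowballStemmer also supports "finnish" and "swedish", the languages for
# which the paper reports the largest gains from proper lemmatization.
```

A downstream classifier is then trained on one of these normalized token streams; the paper measures which normalization choice yields the better subject-indexing quality per language.
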
  5. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.0702061 = score(doc=5846,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:40
  6. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 5906) [ClassicSimilarity], result of:
              0.0702061 = score(doc=5906,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 5906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:28
  7. Arndt, O.: Totale Telematik (2020) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 5907) [ClassicSimilarity], result of:
              0.0702061 = score(doc=5907,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 5907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5907)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2020 19:11:24
  8. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 82) [ClassicSimilarity], result of:
              0.0702061 = score(doc=82,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2020 19:16:24
  9. Baecker, D.: Der Frosch, die Fliege und der Mensch : zum Tod von Humberto Maturana (2021) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
              0.0702061 = score(doc=236,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 5.2021 22:10:24
  10. Eyert, F.: Mathematische Wissenschaftskommunikation in der digitalen Gesellschaft (2023) 0.02
    0.017551525 = product of:
      0.03510305 = sum of:
        0.03510305 = product of:
          0.0702061 = sum of:
            0.0702061 = weight(_text_:22 in 1001) [ClassicSimilarity], result of:
              0.0702061 = score(doc=1001,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.38690117 = fieldWeight in 1001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1001)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Mitteilungen der Deutschen Mathematiker-Vereinigung. 2023, H.1, S.22-25
  11. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.014041219 = product of:
      0.028082438 = sum of:
        0.028082438 = product of:
          0.056164876 = sum of:
            0.056164876 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.056164876 = score(doc=251,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  12. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.01
    0.014041219 = product of:
      0.028082438 = sum of:
        0.028082438 = product of:
          0.056164876 = sum of:
            0.056164876 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.056164876 = score(doc=318,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  13. Schrenk, P.: Gesamtnote 1 für Signal - Telegram-Defizite bei Sicherheit und Privatsphäre : Signal und Telegram im Test (2022) 0.01
    0.014041219 = product of:
      0.028082438 = sum of:
        0.028082438 = product of:
          0.056164876 = sum of:
            0.056164876 = weight(_text_:22 in 486) [ClassicSimilarity], result of:
              0.056164876 = score(doc=486,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.30952093 = fieldWeight in 486, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=486)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2022 14:01:14
  14. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.01
    0.014041219 = product of:
      0.028082438 = sum of:
        0.028082438 = product of:
          0.056164876 = sum of:
            0.056164876 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.056164876 = score(doc=835,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    29.12.2022 18:22:55
  15. Rieger, F.: Lügende Computer (2023) 0.01
    0.014041219 = product of:
      0.028082438 = sum of:
        0.028082438 = product of:
          0.056164876 = sum of:
            0.056164876 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
              0.056164876 = score(doc=912,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.30952093 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 3.2023 19:22:55
  16. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.01
    0.012583166 = product of:
      0.025166333 = sum of:
        0.025166333 = product of:
          0.050332665 = sum of:
            0.050332665 = weight(_text_:indexing in 5992) [ClassicSimilarity], result of:
              0.050332665 = score(doc=5992,freq=2.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.2537542 = fieldWeight in 5992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5992)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    The Indexer: the international journal of indexing. 38(2020) no.3, S.247-270
  17. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.01
    0.012583166 = product of:
      0.025166333 = sum of:
        0.025166333 = product of:
          0.050332665 = sum of:
            0.050332665 = weight(_text_:indexing in 38) [ClassicSimilarity], result of:
              0.050332665 = score(doc=38,freq=2.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.2537542 = fieldWeight in 38, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Citation indexing
  18. Almeida, P. de; Gnoli, C.: Fiction in a phenomenon-based classification (2021) 0.01
    0.012583166 = product of:
      0.025166333 = sum of:
        0.025166333 = product of:
          0.050332665 = sum of:
            0.050332665 = weight(_text_:indexing in 712) [ClassicSimilarity], result of:
              0.050332665 = score(doc=712,freq=2.0), product of:
                0.19835205 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.051817898 = queryNorm
                0.2537542 = fieldWeight in 712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=712)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In traditional classification, fictional works are indexed only by their form, genre, and language, while their subject content is believed to be irrelevant. However, recent research suggests that this may not be the best approach. We tested indexing of a small sample of selected fictional works by Integrative Levels Classification (ILC2), a freely faceted system based on phenomena instead of disciplines, and considered the structure of the resulting classmarks. Issues in the process of subject analysis, such as selection of relevant vs. non-relevant themes and citation order of relevant ones, are identified and discussed. Some phenomena that are covered in scholarly literature can also be identified as relevant themes in fictional literature and expressed in classmarks. This can allow for hybrid search and retrieval systems covering both fiction and nonfiction, which will result in better leveraging of the knowledge contained in fictional works.
  19. Krüger, N.; Pianos, T.: Lernmaterialien für junge Forschende in den Wirtschaftswissenschaften als Open Educational Resources (OER) (2021) 0.01
    0.012286067 = product of:
      0.024572134 = sum of:
        0.024572134 = product of:
          0.049144268 = sum of:
            0.049144268 = weight(_text_:22 in 252) [ClassicSimilarity], result of:
              0.049144268 = score(doc=252,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.2708308 = fieldWeight in 252, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=252)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  20. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.01
    0.010530914 = product of:
      0.021061828 = sum of:
        0.021061828 = product of:
          0.042123657 = sum of:
            0.042123657 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.042123657 = score(doc=266,freq=2.0), product of:
                0.18145745 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051817898 = queryNorm
                0.23214069 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05