Search (1206 results, page 1 of 61)

  • Filter: year_i:[2020 TO 2030} (Lucene range syntax: `[` inclusive lower bound, `}` exclusive upper bound, i.e. publication years 2020-2029)
  1. Lopes Martins, D.; Silva Lemos, D.L. da; Rosa de Oliveira, L.F.; Siqueira, J.; Carmo, D. do; Nunes Medeiros, V.: Information organization and representation in digital cultural heritage in Brazil : systematic mapping of information infrastructure in digital collections for data science applications (2023) 0.09
    0.09181792 = product of:
      0.122423895 = sum of:
        0.054294456 = weight(_text_:da in 968) [ClassicSimilarity], result of:
          0.054294456 = score(doc=968,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26506406 = fieldWeight in 968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=968)
        0.063981645 = product of:
          0.12796329 = sum of:
            0.12796329 = weight(_text_:silva in 968) [ClassicSimilarity], result of:
              0.12796329 = score(doc=968,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.40692633 = fieldWeight in 968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=968)
          0.5 = coord(1/2)
        0.004147791 = product of:
          0.008295582 = sum of:
            0.008295582 = weight(_text_:a in 968) [ClassicSimilarity], result of:
              0.008295582 = score(doc=968,freq=14.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.1685276 = fieldWeight in 968, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=968)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
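The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, the printed score for result 1 can be reproduced from the listed factors; this is a minimal sketch, and the helper names (`field_weight`, `clause_weight`) are illustrative, not part of the Lucene API:

```python
import math

# Reproduce the printed explanation for result 1 (doc 968).
QUERY_NORM = 0.04269026  # queryNorm, as printed above
FIELD_NORM = 0.0390625   # fieldNorm(doc=968)

def field_weight(freq, idf, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
    return math.sqrt(freq) * idf * field_norm

def clause_weight(freq, idf, coord=1.0):
    # weight = queryWeight * fieldWeight, where queryWeight = idf * queryNorm;
    # coord(m/n) scales sub-queries in which only m of n clauses matched
    return (idf * QUERY_NORM) * field_weight(freq, idf, FIELD_NORM) * coord

w_da    = clause_weight(freq=2.0,  idf=4.7981725)             # printed: 0.054294456
w_silva = clause_weight(freq=2.0,  idf=7.3661537, coord=0.5)  # printed: 0.063981645
w_a     = clause_weight(freq=14.0, idf=1.153047,  coord=0.5)  # printed: 0.004147791

# Final score: sum of the matching clauses, scaled by coord(3/4)
score = (w_da + w_silva + w_a) * 0.75
print(score)  # close to the printed 0.09181792 (Lucene uses 32-bit floats)
```

The low-order digits differ slightly from the printout because Lucene computes in single precision.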
    
    Abstract
    This paper focuses on data science in digital cultural heritage in Brazil, where there is a lack of systematized information and curated databases for the integrated organization of documentary knowledge. Thus, the aim was to systematically map the different forms of information organization and representation applied to objects from collections belonging to institutions affiliated with the federal government's Special Department of Culture. This diagnosis is then used to discuss the requirements for devising strategies that favor a better data science information infrastructure to reuse information on Brazil's cultural heritage. Content analysis was used to identify analytical categories and obtain a broader understanding of the documentary sources of these institutions in order to extract, analyze, and interpret the data involved. A total of 215 hyperlinks that can be considered cultural collections of the institutions studied were identified, representing 2,537,921 cultural heritage items. The results show that the online publication of Brazil's digital cultural heritage is limited in terms of technology, copyright licensing, and established information organization practices. This paper provides a conceptual and analytical view to discuss the requirements for formulating strategies aimed at building a data science information infrastructure of Brazilian digital cultural collections that can serve as a basis for future projects.
    Type
    a
  2. Almeida, M.B.: Ontologia em Ciência da Informação: Teoria e Método (1ª ed., Vol. 1). CRV. http://dx.doi.org/10.24824/978655578679.8; Tecnologia e Aplicações (1ª ed., Vol. 2). CRV. http://dx.doi.org/10.24824/978652511477.4; Curso completo com teoria e exercícios (1ª ed., volume suplementar para professores). CRV. [Review] (2022) 0.08
    0.0816775 = product of:
      0.163355 = sum of:
        0.15959246 = weight(_text_:da in 631) [ClassicSimilarity], result of:
          0.15959246 = score(doc=631,freq=12.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.7791261 = fieldWeight in 631, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=631)
        0.0037625222 = product of:
          0.0075250445 = sum of:
            0.0075250445 = weight(_text_:a in 631) [ClassicSimilarity], result of:
              0.0075250445 = score(doc=631,freq=8.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.15287387 = fieldWeight in 631, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=631)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Over the past 30 years, ontologies have been one of the most fertile grounds of research in Knowledge Organization. It is a complex and controversial topic, owing to the difficulty of defining the concept itself and to the ways different scientific fields have appropriated it. Originating in philosophy, ontology is today a territory shared by Computer Science, most notably Data Science, and by Information Science, particularly Knowledge Organization. Few authors in this field have not written on the subject, whether addressing its conceptual boundaries or discussing the relationship between ontologies and other knowledge organization systems such as taxonomies, thesauri, and classifications.
    Source
    Boletim do Arquivo da Universidade de Coimbra 35(2022) no.1, S.191-198
    Type
    a
  3. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.07
    0.06664342 = product of:
      0.13328683 = sum of:
        0.056502875 = product of:
          0.16950862 = sum of:
            0.16950862 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.16950862 = score(doc=5669,freq=2.0), product of:
                0.3619285 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04269026 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.07678396 = weight(_text_:da in 5669) [ClassicSimilarity], result of:
          0.07678396 = score(doc=5669,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.37485722 = fieldWeight in 5669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.5 = coord(2/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  4. Zhang, D.; Wu, C.: What online review features really matter? : an explainable deep learning approach for hotel demand forecasting (2023) 0.06
    0.062623106 = product of:
      0.12524621 = sum of:
        0.12140611 = weight(_text_:da in 1039) [ClassicSimilarity], result of:
          0.12140611 = score(doc=1039,freq=10.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.5927013 = fieldWeight in 1039, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1039)
        0.0038401082 = product of:
          0.0076802163 = sum of:
            0.0076802163 = weight(_text_:a in 1039) [ClassicSimilarity], result of:
              0.0076802163 = score(doc=1039,freq=12.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.15602624 = fieldWeight in 1039, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1039)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Accurate demand forecasting plays a critical role in hotel revenue management. Online reviews have emerged as a viable information source for hotel demand forecasting. However, existing hotel demand forecasting studies leverage only sentiment information from online reviews, leading to capturing insufficient information. Furthermore, prevailing hotel demand forecasting methods either lack explainability or fail to capture local correlations within sequences. In this study, we (1) propose a comprehensive framework consisting of four components: expertise, sentiment, popularity, and novelty (ESPN framework), to investigate the impact of online reviews on hotel demand forecasting; (2) propose a novel dual attention-based long short-term memory convolutional neural network (DA-LSTM-CNN) model to optimize the model effectiveness. We collected online review data from Ctrip.com to evaluate our proposed ESPN framework and DA-LSTM-CNN model. The empirical results show that incorporating features derived from the ESPN improves forecasting accuracy and our DA-LSTM-CNN significantly outperforms the state-of-the-art models. Further, we use a case study to illustrate the explainability of the DA-LSTM-CNN, which could guide future setups for hotel demand forecasting systems. We discuss how stakeholders can benefit from our proposed ESPN framework and DA-LSTM-CNN model.
    Type
    a
  5. Bischoff, M.: ¬Das grosse Experiment (2021) 0.06
    0.055862173 = product of:
      0.11172435 = sum of:
        0.10858891 = weight(_text_:da in 329) [ClassicSimilarity], result of:
          0.10858891 = score(doc=329,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.5301281 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.078125 = fieldNorm(doc=329)
        0.0031354348 = product of:
          0.0062708696 = sum of:
            0.0062708696 = weight(_text_:a in 329) [ClassicSimilarity], result of:
              0.0062708696 = score(doc=329,freq=2.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.12739488 = fieldWeight in 329, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=329)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Numerous competing scientific theories attempt to describe consciousness. In an unprecedented collaboration, experiments on hundreds of test subjects are now taking place worldwide to put two of the leading approaches to the test.
    Type
    a
  6. Scheven, E.: Qualitätssicherung in der GND (2021) 0.05
    0.051809754 = product of:
      0.10361951 = sum of:
        0.065153345 = weight(_text_:da in 314) [ClassicSimilarity], result of:
          0.065153345 = score(doc=314,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=314)
        0.038466163 = sum of:
          0.0037625222 = weight(_text_:a in 314) [ClassicSimilarity], result of:
            0.0037625222 = score(doc=314,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.07643694 = fieldWeight in 314, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=314)
          0.034703642 = weight(_text_:22 in 314) [ClassicSimilarity], result of:
            0.034703642 = score(doc=314,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.23214069 = fieldWeight in 314, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=314)
      0.5 = coord(2/4)
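The idf values that recur throughout these explanation trees follow ClassicSimilarity's formula, idf = 1 + ln(maxDocs / (docFreq + 1)). A quick sketch verifying this against the figures printed for entry 6; `classic_idf` is an illustrative helper, not a Lucene API call, and the results match the printout only up to 32-bit float rounding:

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(classic_idf(3622, 44218))   # term "22": printed above as 3.5018296
print(classic_idf(37942, 44218))  # term "a":  printed above as 1.153047
print(classic_idf(990, 44218))    # term "da": printed above as 4.7981725
```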
    
    Abstract
    What might the acronym GND stand for? Giving free rein to the imagination, we arrive at expansions such as "Golfer nehmen Datteln" (golfers take dates), "Gerne noch Details" (more details, please), "Glück nach Dauerstress" (happiness after constant stress), "Größter Nutzen Deutschlands" (Germany's greatest asset), and much more. A more serious search leads to "Gesamtnutzungsdauer" (total service life) or to a concept from electrical engineering: the voltage supplied by a power source always refers to a base level. This base level is called "Masse" in German, but "ground" or GND in English. Engineers know its circuit symbol. In the information-science field, however, GND stands for the Gemeinsame Normdatei (Integrated Authority File). It, too, has had a symbol since 2020. Since the Gemeinsame Normdatei (hereafter simply GND) is also an instrument of subject indexing, its strengths and weaknesses affect the quality of subject indexing. This article is therefore devoted to quality assurance in the GND.
    Date
    23. 9.2021 19:12:22
    Type
    a
  7. Lima, G.A. de; Castro, I.R.: Uso da classificação decimal universal para a recuperação da informação em ambientes digitais : uma revisão sistemática da literatura (2021) 0.05
    0.04909428 = product of:
      0.09818856 = sum of:
        0.09404077 = weight(_text_:da in 760) [ClassicSimilarity], result of:
          0.09404077 = score(doc=760,freq=6.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.45910448 = fieldWeight in 760, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=760)
        0.004147791 = product of:
          0.008295582 = sum of:
            0.008295582 = weight(_text_:a in 760) [ClassicSimilarity], result of:
              0.008295582 = score(doc=760,freq=14.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.1685276 = fieldWeight in 760, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=760)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge Organization Systems, even traditional ones, such as the Universal Decimal Classification, have been studied to improve the retrieval of information online, although the potential of using knowledge structures in the user interface has not yet been widespread. Objective: This study presents a mapping of scientific production on information retrieval methodologies, which make use of the Universal Decimal Classification. Methodology: Systematic Literature Review, conducted in two stages, with a selection of 44 publications, resulting in the time interval from 1964 to 2017, whose categories analyzed were: most productive authors, languages of publications, types of document, year of publication, most cited work, major impact journal, and thematic categories covered in the publications. Results: A total of nine more productive authors and co-authors were found; predominance of the English language (42 publications); works published in the format of journal articles (33); and highlight to the year 2007 (eight publications). In addition, it was identified that the most cited work was by Mcilwaine (1997), with 61 citations, and the journal Extensions & Corrections to the UDC was the one with the largest number of publications, in addition to the incidence of the theme Universal Automation linked to a thesaurus for information retrieval, present in 19 works. Conclusions: Shortage of studies that explore the potential of the Decimal Classification, especially in Brazilian literature, which highlights the need for further study on the topic, involving research at the national and international levels.
    Footnote
    English title: Use of the Universal Decimal Classification for the recovery of information in digital environments: a systematic review of literature.
    Type
    a
  8. Pielmeier, S.; Voß, V.; Carstensen, H.; Kahl, B.: Online-Workshop "Computerunterstützte Inhaltserschließung" 2020 (2021) 0.05
    0.04701101 = product of:
      0.09402202 = sum of:
        0.09214076 = weight(_text_:da in 4409) [ClassicSimilarity], result of:
          0.09214076 = score(doc=4409,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.44982868 = fieldWeight in 4409, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=4409)
        0.0018812611 = product of:
          0.0037625222 = sum of:
            0.0037625222 = weight(_text_:a in 4409) [ClassicSimilarity], result of:
              0.0037625222 = score(doc=4409,freq=2.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.07643694 = fieldWeight in 4409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4409)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    For the first time in digital form, and with 230 participants, the 4th workshop "Computerunterstützte Inhaltserschließung" (computer-assisted subject indexing) took place on 11 and 12 November 2020, organized by the Deutsche Nationalbibliothek (DNB), the company Eurospider Information Technology, the Staatsbibliothek zu Berlin - Preußischer Kulturbesitz (SBB), the UB Stuttgart, and the Bibliotheksservice-Zentrum Baden-Württemberg (BSZ). The focus was on the "Digitaler Assistent DA-3": eleven presentations covered application scenarios and experiences with the system, which is intended to support libraries and other research and cultural institutions in subject indexing. Frank Scholze (Director General of the DNB) opened the two workshop days with a welcome and introduction. He sees the DA-3 as a building block for interlocking intellectual and machine-based indexing.
    Type
    a
  9. Mahn, J.: Wenn Maschinen philosophieren - wo bleibt da der Mensch? (2020) 0.05
    0.04701101 = product of:
      0.09402202 = sum of:
        0.09214076 = weight(_text_:da in 46) [ClassicSimilarity], result of:
          0.09214076 = score(doc=46,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.44982868 = fieldWeight in 46, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=46)
        0.0018812611 = product of:
          0.0037625222 = sum of:
            0.0037625222 = weight(_text_:a in 46) [ClassicSimilarity], result of:
              0.0037625222 = score(doc=46,freq=2.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.07643694 = fieldWeight in 46, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=46)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    https://www.heise.de/news/heiseshow-Wenn-Maschinen-philosophieren-wo-bleibt-da-der-Mensch-4974474.html?view=print
    Type
    a
  10. Gabler, S.: Thesauri - a Toolbox for Information Retrieval (2023) 0.05
    0.045209236 = product of:
      0.09041847 = sum of:
        0.08687113 = weight(_text_:da in 114) [ClassicSimilarity], result of:
          0.08687113 = score(doc=114,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.42410251 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
        0.00354734 = product of:
          0.00709468 = sum of:
            0.00709468 = weight(_text_:a in 114) [ClassicSimilarity], result of:
              0.00709468 = score(doc=114,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.14413087 = fieldWeight in 114, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=114)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Thesauri are established instruments of subject indexing in libraries. Recent technological developments and the rise of artificial intelligence have increased their importance, since they can deliver explainable results for computer-assisted indexing, for concordance work with other data sets and models, and for data validation. Building on the author's own research for a master's thesis, the aspect of quality assurance in library catalogs is explored in depth using selected examples.
    Type
    a
  11. Seth, A.K.: Unsere inneren Universen (2020) 0.04
    0.04468974 = product of:
      0.08937948 = sum of:
        0.08687113 = weight(_text_:da in 5662) [ClassicSimilarity], result of:
          0.08687113 = score(doc=5662,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.42410251 = fieldWeight in 5662, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0625 = fieldNorm(doc=5662)
        0.0025083479 = product of:
          0.0050166957 = sum of:
            0.0050166957 = weight(_text_:a in 5662) [ClassicSimilarity], result of:
              0.0050166957 = score(doc=5662,freq=2.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.10191591 = fieldWeight in 5662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5662)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Our brain continuously makes guesses about the world out there and checks them against sensory impressions. In doing so, it constructs the reality we perceive as a kind of controlled hallucination.
    Type
    a
  12. Uhlemann, S.; Hammer, A.: Retrokonversion von 1,2 Millionen Zettelkarten in 1,5 Jahren (2021) 0.04
    0.03955808 = product of:
      0.07911616 = sum of:
        0.07601224 = weight(_text_:da in 302) [ClassicSimilarity], result of:
          0.07601224 = score(doc=302,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.3710897 = fieldWeight in 302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0546875 = fieldNorm(doc=302)
        0.0031039226 = product of:
          0.006207845 = sum of:
            0.006207845 = weight(_text_:a in 302) [ClassicSimilarity], result of:
              0.006207845 = score(doc=302,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.12611452 = fieldWeight in 302, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=302)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Location
    Da
    Type
    a
  13. Womser-Hacker, C.: Informationswissenschaftliche Perspektiven des Information Retrieval (2023) 0.04
    0.03917584 = product of:
      0.07835168 = sum of:
        0.07678396 = weight(_text_:da in 798) [ClassicSimilarity], result of:
          0.07678396 = score(doc=798,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.37485722 = fieldWeight in 798, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=798)
        0.0015677174 = product of:
          0.0031354348 = sum of:
            0.0031354348 = weight(_text_:a in 798) [ClassicSimilarity], result of:
              0.0031354348 = score(doc=798,freq=2.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.06369744 = fieldWeight in 798, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=798)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Several disciplines are engaged with information retrieval (IR) in research and development, to varying extents and from different perspectives. These different orientations matter, because only in combination can they convey an overall picture of IR. Computer science pursues a more system-driven, technological approach to IR, foregrounding algorithms and implementations, whereas for information science the focus is on users in their many-layered contexts. Their characteristics (subject background, domain membership, expertise, etc.) and the goals they pursue through IR play a central role in the interaction process between human and system. Much attention is also given to how users behave in these processes and why they turn to different systems. Since a large part of today's knowledge is still represented in texts, a further discipline, computational linguistics/language technology, is relevant to IR. In addition, visual and auditory knowledge objects are increasingly coming into play and, given their growing volume, are becoming ever more important for IR. A new field is data science, which builds on long-familiar concepts from statistics and probability theory, operates on data, and also uses traditional IR knowledge to bring together structured facts and unstructured texts. Here the information-science perspective is in the foreground.
    Type
    a
  14. Hubert, M.; Griesbaum, J.; Womser-Hacker, C.: Usability von Browsererweiterungen zum Schutz vor Tracking (2020) 0.04
    
    Abstract
    Browser extensions for protection against tracking are popular tools for safeguarding users' privacy. Their actual effectiveness depends to a large degree on their usability, since usability determines the extent to which these tools can be used effectively, efficiently, and satisfactorily. The present study examines the usability of four such browser extensions by means of user tests. The results show that even today these add-ons exhibit a large number of usability defects. The core problems are, above all, poor comprehensibility and a lack of guidance and support for users.
    Type
    a
  15. Katzlberger, M.: GPT-3 - die erste allgemeine Künstliche Intelligenz? (2020) 0.04
    
    Content
    See also: https://openai.com/blog/openai-api/. See also: https://www.heise.de/hintergrund/GPT-3-Schockierend-guter-Sprachgenerator-4867089.html. See also: https://www.heise.de/news/heiseshow-Wenn-Maschinen-philosophieren-wo-bleibt-da-der-Mensch-4974474.html?view=print.
    Type
    a
  16. Reimer, U.: Empfehlungssysteme (2023) 0.04
    
    Abstract
    As the flood of information grows, information systems face ever higher demands to select, from the set of potentially relevant information, the information that is most relevant in a given context. Recommender systems play a special role here, since they can filter out relevant information in a personalized way, i.e., context-specifically and individually per user. Definition: a recommender system recommends to a user, in a defined context, a subset of a given set of recommendation objects as relevant. Recommender systems draw users' attention to objects they might never have found otherwise, either because they would not have searched for them or because the objects would have been lost in the sheer volume of relevant information.
    Type
    a
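    The definition in the entry above - a system that, for a user and context, selects a relevant subset from a set of recommendation objects - can be sketched as a minimal user-based collaborative filter. This is a toy illustration, not the approach of any system discussed in the entry; all data and function names are invented.

    ```python
    # Minimal user-based collaborative-filtering sketch: score unseen items
    # by the similarity-weighted ratings of other users. Toy data only.
    from math import sqrt

    def cosine(u, v):
        """Cosine similarity between two sparse rating dicts."""
        common = set(u) & set(v)
        num = sum(u[i] * v[i] for i in common)
        den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0

    def recommend(ratings, user, k=2):
        """Recommend up to k items the user has not rated yet."""
        scores = {}
        for other, other_ratings in ratings.items():
            if other == user:
                continue
            sim = cosine(ratings[user], other_ratings)
            for item, r in other_ratings.items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * r
        return [item for item, _ in sorted(scores.items(), key=lambda x: -x[1])][:k]

    ratings = {
        "alice": {"a": 5, "b": 3},
        "bob":   {"a": 4, "b": 3, "c": 5},
        "carol": {"b": 2, "d": 4},
    }
    print(recommend(ratings, "alice"))  # → ['c', 'd']
    ```

    The "context" of the definition is reduced here to the rating matrix itself; real systems also condition on time, device, task, and similar signals.
    
    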
  17. Struß, J.M.; Lewandowski, D.: Methoden in der Informationswissenschaft (2023) 0.04
    
    Abstract
    Without research methods there is no scientific gain in knowledge. Methods help us arrive at findings that are as well founded as possible. This is what distinguishes scientific knowledge production from other ways of producing and justifying knowledge. We often rely on common sense, on our own life experience, or on authorities - yet all of these justifications of knowledge have considerable deficits compared with the scientific production and justification of knowledge. Using scientific methods allows us to obtain statements about phenomena that are traceable and verifiable by others. Scientific discourse rests on such statements; scientific discussion is thus fundamentally different from everyday discussion, since it rests on findings whose significance may be judged differently by different people, but whose facticity is accepted by all.
    Type
    a
  18. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges- summary and question answering- prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
    Type
    a
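    The entry above scores human- and machine-written texts on readability and statistical clarity; the authors' actual metric is not reproduced here. As a hedged illustration only, a crude readability proxy can be computed from average sentence length and average word length (the splitting rules below are invented for the sketch).

    ```python
    # Crude readability proxy: mean words per sentence and mean characters
    # per word. Not the metric from the cited paper; illustration only.
    import re

    def readability_proxy(text):
        """Return (mean words per sentence, mean characters per word)."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        wps = len(words) / len(sentences)
        cpw = sum(len(w) for w in words) / len(words)
        return wps, cpw

    wps, cpw = readability_proxy("I propose to consider the question. Can machines think?")
    print(round(wps, 1), round(cpw, 2))  # → 4.5 5.0
    ```

    Established formulas such as Flesch reading ease combine exactly these two kinds of averages with fixed weights; any weighting chosen here would be an assumption.
    
    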
  19. Rodrigues Barbosa, E.; Godoy Viera, A.F.: Relações semânticas e interoperabilidade em tesauros representados em SKOS : uma revisao sistematica da literatura (2022) 0.03
    
    Abstract
    Objective: This study aims to understand how the Simple Knowledge Organization System data model and its extension models have been used to promote interoperability with other vocabularies and to refine semantic relations in thesauri on the web. Methodology: documentary research on the reference guides of the data models used to represent thesauri on the web. Results: the data models have been used to represent terms and their linguistic variants, the relationships between groups and subgroups of concepts from an intra-vocabulary perspective, and the relationships between concepts of distinct vocabularies from an inter-vocabulary perspective. Conclusions: the use of the Simple Knowledge Organization System and its extension models contributes to a better structuring of concepts in thesauri. The extension models are appropriate for representing compound equivalence relationships and for structuring groups and subgroups of concepts in thesauri.
    Type
    a
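    The intra- vs. inter-vocabulary relations the entry above describes can be sketched with SKOS property names over plain tuples, without an RDF library. The concepts and mappings below are invented toy data; SKOS itself defines `skos:broader`/`skos:narrower` as inverses and `skos:exactMatch` as the strongest mapping link between vocabularies.

    ```python
    # SKOS-style triples as plain tuples (subject, property, object).
    # "vocA:" and "vocB:" stand for two distinct vocabularies; toy data only.
    triples = {
        ("vocA:animals", "skos:narrower", "vocA:cats"),
        ("vocA:cats", "skos:broader", "vocA:animals"),     # intra-vocabulary hierarchy
        ("vocA:cats", "skos:exactMatch", "vocB:felines"),  # inter-vocabulary mapping
    }

    def inverse_ok(triples):
        """Check that every skos:broader is mirrored by a skos:narrower."""
        return all((o, "skos:narrower", s) in triples
                   for s, p, o in triples if p == "skos:broader")

    def mappings(triples):
        """Collect inter-vocabulary links (subject and object prefixes differ)."""
        return [(s, o) for s, p, o in triples
                if p == "skos:exactMatch" and s.split(":")[0] != o.split(":")[0]]

    print(inverse_ok(triples))   # → True
    print(mappings(triples))     # → [('vocA:cats', 'vocB:felines')]
    ```

    Real thesauri would use an RDF toolkit and the SKOS-XL or ISO 25964 extension models mentioned in the entry; the consistency check above only illustrates the kind of structure those models make explicit.
    
    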
  20. Soares-Silva, D.; Salati Marcondes de Moraes, G.H.; Cappellozza, A.; Morini, C.: Explaining library user loyalty through perceived service quality : what is wrong? (2020) 0.03
    
    Abstract
    This study validates the adaptation of a loyalty scale for the library scenario and recovers the hierarchical nature of the perceived service quality (PSQ) by operationalizing it as a second-order level construct, composed by the determinants of service quality (DSQ) identified by Parasuraman, Zeithaml, and Berry in 1985. Our hypothesis was that DSQ are distinct and complementary dimensions, in opposition to the overlapping of DSQ proposed in the SERVQUAL and LibQUAL+® models. In addition, the influence of PSQ on user loyalty (UL) was investigated. Using structural equation modeling, we analyzed the survey data of 1,028 users of a network of academic libraries and report 2 main findings. First, it was shown that the 10 DSQ are statistically significant for the evaluation of PSQ. Second, we demonstrated the positive effect of PSQ for UL. The model presented may be used as a diagnostic and benchmarking tool for managers, coordinators, and librarians who seek to evaluate and/or assess the quality of the services offered by their libraries, as well as to identify and/or manage the loyalty level of their users.
    Type
    a

Languages

  • e 820
  • d 377
  • pt 6
  • sp 1

Types

  • a 1148
  • el 249
  • m 25
  • p 13
  • s 6
  • x 2
  • A 1
  • EL 1
  • r 1

Themes

Subjects

Classifications