Search (183 results, page 1 of 10)

  • year_i:[2020 TO 2030}
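The active facet above uses Lucene range syntax: a square bracket makes a bound inclusive and a curly brace makes it exclusive, so `[2020 TO 2030}` keeps 2020 but drops 2030. A minimal sketch of the filter's semantics (the function name is illustrative, not part of the search engine):

```python
def year_in_range(year: int, lo: int = 2020, hi: int = 2030) -> bool:
    """Semantics of the facet year_i:[2020 TO 2030}:
    '[' = lower bound inclusive, '}' = upper bound exclusive."""
    return lo <= year < hi
```

So a 2030 publication would fall outside this facet even though 2030 appears in the query string.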
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.19
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
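The relevance figure after each title is a Lucene ClassicSimilarity (TF-IDF) score. A minimal sketch of how it is assembled, assuming Lucene's classic defaults (tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1))); each matched query term contributes a weight, and the sum is scaled by a coordination factor (matched clauses / total clauses):

```python
import math

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One query term's contribution to a document's ClassicSimilarity score."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(termFreq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # query-side normalization
    field_weight = tf * idf * field_norm             # document/field-side weight
    return query_weight * field_weight

def doc_score(weights, matched_clauses, total_clauses):
    """Per-document score: sum of term weights times coord(matched/total)."""
    return sum(weights) * (matched_clauses / total_clauses)
```

For example, a rare term with docFreq = 24 in an index of 44,218 documents, occurring twice in a field (freq = 2.0) with fieldNorm 0.0390625 and queryNorm 0.02727315, contributes roughly 0.108; a few such clauses, scaled by coord, yield scores of the magnitude shown beside each hit.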
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.17
    
    Source
    https://arxiv.org/abs/2212.06721
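Several links in this result set (e.g., the presentation link under hit 1) are wrapped in Google-redirect residue with percent-encoded targets. A quick recovery sketch using only Python's standard library; `unwrap` is an illustrative helper, not part of any library:

```python
from urllib.parse import parse_qs, unquote, urlparse

def unwrap(link: str) -> str:
    """Recover the real target from a google.com/url?... wrapper, or decode a
    bare percent-encoded fragment with trailing &usg=... tracking residue."""
    qs = parse_qs(urlparse(link).query)
    if "url" in qs:                        # full redirect wrapper: take url= param
        return qs["url"][0]
    return unquote(link.split("&", 1)[0])  # drop &usg=... residue, then decode

print(unwrap("https%3A%2F%2Farxiv.org%2Fabs%2F2212.06721&usg=AOvVaw3i_9pZm9y_dQWoHi6uv0EN"))
# → https://arxiv.org/abs/2212.06721
```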
  3. Rölke, H.; Weichselbraun, A.: Ontologien und Linked Open Data (2023) 0.02
    
    Abstract
    The term ontology originally comes from metaphysics, a branch of philosophy concerned with understanding the fundamental structure and principles of reality. Ontologies address the question of which things exist at the most fundamental level, how they can be structured, and in what relationships they stand to one another. In information science, by contrast, ontologies are used to formalize the vocabulary for describing domains of knowledge. The goal is for all actors working in these domains to use the same concepts and terms, enabling smooth collaboration without misunderstandings. For example, the Dublin Core Metadata Initiative defined 15 core elements that can be used to describe electronic resources and media. Each element is described by a unique label (for example, identifier) and an associated conception that fixes the meaning of that label as precisely as possible. According to the Dublin Core ontology, for instance, an identifier must uniquely identify a document with respect to an associated catalog. Depending on the catalog, an ISBN (catalog of books), ISSN (catalog of journals), URL (web), DOI (publication database), etc. could therefore serve as the identifier.
  4. Pintscher, L.; Bourgonje, P.; Moreno Schneider, J.; Ostendorff, M.; Rehm, G.: Wissensbasen für die automatische Erschließung und ihre Qualität am Beispiel von Wikidata : die Inhaltserschließungspolitik der Deutschen Nationalbibliothek (2021) 0.02
    
    Abstract
    Wikidata is a free knowledge base that provides general data about the world. It is developed and operated by Wikimedia, as is its sister project Wikipedia. The data in Wikidata are collected and maintained by a large community of volunteers, and the data as well as the underlying ontology are used by many projects, institutions, and companies as a basis for applications and visualizations, but also for training machine-learning methods. Wikidata uses MediaWiki and its Wikibase extension as the technical foundation for collaborative work on a knowledge base that makes linked open data accessible to humans and machines. At the end of 2020, Wikidata described more than 90 million entities using over 8,000 properties, with a total of more than 1.15 billion statements made about the described entities. The data objects of these entities are linked to equivalent entries in more than 5,500 external databases, catalogs, and websites, making Wikidata one of the central nodes of the Linked Data Web. More than 11,500 active editors enter new data into the knowledge base and maintain it. They are organized in wiki projects, each addressing particular subject areas or tasks. The data are used on more than half of the content pages in the Wikimedia projects and are, among other things, queried more than 6.5 million times a day via the SPARQL endpoint for inclusion in external applications and visualizations.
  5. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.01
    
    Abstract
    This is the first study to evaluate the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and the African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in the Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
  6. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.01
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  7. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.01
    
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources, which makes it more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps broaden the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights concerning semantic web technologies, and in addition it elucidates the issues as well as the solutions regarding the Semantic Web. It highlights the semantic web architecture in detail while also comparing it with the traditional search system, and it classifies the architecture into three major pillars: RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology, and it attempts to illustrate different approaches of semantic web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
    Theme
    Semantic Web
  8. Zheng, X.; Chen, J.; Yan, E.; Ni, C.: Gender and country biases in Wikipedia citations to scholarly publications (2023) 0.01
    
    Abstract
    Ensuring Wikipedia cites scholarly publications based on quality and relevancy without biases is critical to credible and fair knowledge dissemination. We investigate gender- and country-based biases in Wikipedia citation practices using linked data from the Web of Science and a Wikipedia citation dataset. Using coarsened exact matching, we show that publications by women are cited less by Wikipedia than expected, and publications by women are less likely to be cited than those by men. Scholarly publications by authors affiliated with non-Anglosphere countries are also disadvantaged in getting cited by Wikipedia, compared with those by authors affiliated with Anglosphere countries. The level of gender- or country-based inequalities varies by research field, and the gender-country intersectional bias is prominent in math-intensive STEM fields. To ensure the credibility and equality of knowledge presentation, Wikipedia should consider strategies and guidelines to cite scholarly publications independent of the gender and country of authors.
    Date
    22. 1.2023 18:53:32
  9. Wang, H.; Song, Y.-Q.; Wang, L.-T.: Memory model for web ad effect based on multimodal features (2020) 0.00
    
    Abstract
    Web ad effect evaluation is a challenging problem in web marketing research. Although the analysis of web ad effectiveness has achieved excellent results, some deficiencies remain. First, there is a lack of in-depth study of the relevance between advertisements and web content. Second, there is no thorough analysis of the impact of user and advertising features on user browsing behavior. And last, the evaluation indexes for web advertisement effect are not adequate. Given the above problems, we conducted our work by studying the observer's behavioral pattern based on multimodal features. First, we analyze the correlation between ads and links with different search results and further assess the influence of relevance on the observer's attention to web ads using eye-movement features. Then we investigate the user's behavioral sequence and propose the directional frequent-browsing pattern algorithm for mining the user's most commonly used browsing patterns. Finally, we offer "memory" as a new measure of advertising effectiveness and build an advertising memory model with integrated multimodal features for predicting the efficacy of web ads. A large number of experiments have proved the superiority of our method.
  10. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.00
    
    Abstract
    Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically superimposed and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases or examples of relationships between objects, suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM, and RiC-CM, were collected and used as examples or inspiration for culturally relevant relationships. The relationships identified are collated and compared to find those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
  11. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.00
    
    Abstract
    Purpose This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden. Design/methodology/approach This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol. Findings Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study. Originality/value There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  12. Kang, M.: Dual paths to continuous online knowledge sharing : a repetitive behavior perspective (2020) 0.00
    
    Abstract
    Purpose Continuous knowledge sharing by active users, who are highly active in answering questions, is crucial to the sustenance of social question-and-answer (Q&A) sites. The purpose of this paper is to examine such knowledge sharing considering reason-based elaborate decision and habit-based automated cognitive processes. Design/methodology/approach To verify the research hypotheses, survey data on subjective intentions and web-crawled data on objective behavior are utilized. The sample size is 337 with the response rate of 27.2 percent. Negative binomial and hierarchical linear regressions are used given the skewed distribution of the dependent variable (i.e. the number of answers). Findings Both elaborate decision (linking satisfaction, intentions and continuance behavior) and automated cognitive processes (linking past and continuance behavior) are significant and substitutable. Research limitations/implications By measuring both subjective intentions and objective behavior, it verifies a detailed mechanism linking continuance intentions, past behavior and continuous knowledge sharing. The significant influence of automated cognitive processes implies that online knowledge sharing is habitual for active users. Practical implications Understanding that online knowledge sharing is habitual is imperative to maintaining continuous knowledge sharing by active users. Knowledge sharing trends should be monitored to check if the frequency of sharing decreases. Social Q&A sites should intervene to restore knowledge sharing behavior through personalized incentives. Originality/value This is the first study utilizing both subjective intentions and objective behavior data in the context of online knowledge sharing. It also introduces habit-based automated cognitive processes to this context. This approach extends the current understanding of continuous online knowledge sharing behavior.
    Date
    20. 1.2015 18:30:22
  13. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.00
    
    Abstract
    Purpose The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  14. Thelwall, M.; Thelwall, S.: A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.00
    
    Abstract
    Purpose Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  15. Hoeber, O.; Harvey, M.; Dewan Sagar, S.A.; Pointon, M.: The effects of simulated interruptions on mobile search tasks (2022) 0.00
    
    Abstract
    While it is clear that using a mobile device can interrupt real-world activities such as walking or driving, the effects of interruptions on mobile device use have been under-studied. We are particularly interested in how the ambient distraction of walking while using a mobile device, combined with the occurrence of simulated interruptions of different levels of cognitive complexity, affect web search activities. We have established an experimental design to study how the degree of cognitive complexity of simulated interruptions influences both objective and subjective search task performance. In a controlled laboratory study (n = 27), quantitative and qualitative data were collected on mobile search performance, perceptions of the interruptions, and how participants reacted to the interruptions, using a custom mobile eye-tracking app, a questionnaire, and observations. As expected, more cognitively complex interruptions resulted in increased overall task completion times and higher perceived impacts. Interestingly, the effect on the resumption lag or the actual search performance was not significant, showing the resiliency of people to resume their tasks after an interruption. Implications from this study enhance our understanding of how interruptions objectively and subjectively affect search task performance, motivating the need for providing explicit mobile search support to enable recovery from interruptions.
    Date
    3. 5.2022 13:22:33
  16. Zhang, Y.; Liu, J.; Song, S.: The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.00
    
    Abstract
    Evaluating the quality of online health information (OHI) is a major challenge facing consumers. We designed PageGraph, an interface that displays quality indicators and associated values for a webpage, based on credibility evaluation models, the nudge theory, and existing empirical research concerning professionals' and consumers' evaluation of OHI quality. A qualitative evaluation of the interface with 16 participants revealed that PageGraph rendered the information and presentation nudges as intended. It provided the participants with easier access to quality indicators, encouraged fresh angles to assess information credibility, provided an evaluation framework, and encouraged validation of initial judgments. We then conducted a quantitative evaluation of the interface involving 60 participants using a between-subject experimental design. The control group used a regular web browser and evaluated the credibility of 12 preselected webpages, whereas the experimental group evaluated the same webpages with the assistance of PageGraph. PageGraph did not significantly influence participants' evaluation results. The results may be attributed to the insufficiency of the saliency and structure of the nudges implemented and the webpage stimuli's lack of sensitivity to the intervention. Future directions for applying nudges to support OHI evaluation were discussed.
    Date
    22. 6.2023 18:18:34
  17. Ostani, M.M.; Sohrabi, M.C.; Taheri, S.M.; Asemi, A.: Localization of Schema.org for manuscript description in the Iranian-Islamic information context (2021) 0.00
    
    Abstract
    This study aims to assess the localization of Schema.org for manuscript description in the Iranian-Islamic information context using documentary and qualitative content analysis. Schema.org introduces schemas for different web content objects so as to generate structured data. Given that the structure of Schema.org is ontological, the inheritance of the manuscript types from the properties of their parent types, as well as the localization and description of the specific properties of manuscripts in the Iranian-Islamic information context, were investigated in order to improve their indexability and semantic visibility in web search engines. The properties proposed specifically for the manuscript type, and the six properties proposed for addition to the "CreativeWork" type, are found to be consistent with other schema properties. In turn, these properties yield a localized version of the existing schema that makes the manuscript type compatible with the Iranian-Islamic information context. This schema is also applicable to centers with published records on the web; if those records are marked up with the proposed properties, their indexability and semantic visibility in web search engines increase accordingly. The generation of structured data in the web environment through this schema is deemed to promote the concept of the Semantic Web and to make data and knowledge retrieval easier.
  18. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.00
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced, and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, which discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
  19. Peters, I.: Folksonomies & Social Tagging (2023) 0.00
    
    Abstract
    Research on, and the use of, folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation peaked in the roughly ten years from about 2005. This was driven by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media and Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web, and generated strong demand for scalable methods of subject indexing.
  20. Scholz, M.: Wie können Daten im Web mit JSON nachgenutzt werden? (2023) 0.00
    
    Abstract
    Martin Scholz is a computer scientist at the university library of Erlangen-Nürnberg. As head of its Digitale Entwicklung und Datenmanagement group, he works extensively with web technologies and data transformation. He takes on the current ABI-Technik question: how can data on the web be reused with JSON?
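    The question this entry poses - reusing web data delivered as JSON - can be sketched minimally in Python with the standard library alone. The record below is an illustrative stand-in, not data from this catalogue; a real source would be fetched over HTTP (e.g. with urllib.request) before parsing.

    ```python
    import json

    # Illustrative JSON record (assumed shape, not an actual catalogue response).
    raw = '{"title": "Wie können Daten im Web mit JSON nachgenutzt werden?", "year": 2023, "subjects": ["JSON", "Web"]}'

    record = json.loads(raw)                     # parse JSON text into Python objects
    subjects = ", ".join(record["subjects"])     # reuse a field in a new presentation
    print(f'{record["title"]} ({record["year"]}): {subjects}')
    ```

    Once parsed, the data is an ordinary dictionary and can be filtered, merged, or re-serialized for other applications, which is the core of the reuse idea.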

Languages

  • e 140
  • d 42
  • pt 1

Types

  • a 165
  • el 33
  • m 6
  • p 5
  • s 2
  • x 2
  • A 1
  • EL 1