Search (42 results, page 1 of 3)

  • language_ss:"e"
  • type_ss:"a"
  • type_ss:"el"
  1. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.08
    0.08146443 = product of:
      0.16292886 = sum of:
        0.16292886 = sum of:
          0.114715785 = weight(_text_:ii in 40) [ClassicSimilarity], result of:
            0.114715785 = score(doc=40,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.41776034 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.048213083 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.048213083 = score(doc=40,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy, even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
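The indented tree under each hit is Lucene's explain output for ClassicSimilarity (TF-IDF) scoring: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord(m/n) scales the sum by the fraction of query clauses that matched. A minimal sketch reproducing the numbers for result 1, with queryNorm and fieldNorm copied from the tree (the helper names are mine, not Lucene's API):

```python
from math import log, sqrt

# Minimal sketch of the ClassicSimilarity arithmetic in the explain tree above
# for result 1 (doc 40). queryNorm and fieldNorm are taken from the explain
# output rather than recomputed.

MAX_DOCS = 44218
QUERY_NORM = 0.050836053          # queryNorm, as reported in the tree

def idf(doc_freq: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    """score = queryWeight * fieldWeight for one matching term."""
    tf = sqrt(freq)                               # tf(freq) = sqrt(freq)
    query_weight = idf(doc_freq) * QUERY_NORM     # e.g. ~0.2745971 for "ii"
    field_weight = tf * idf(doc_freq) * field_norm
    return query_weight * field_weight

# Terms matched in doc 40: "ii" (docFreq=541) and "22" (docFreq=3622),
# both with freq=2 and fieldNorm=0.0546875.
s_ii = term_score(2.0, 541, 0.0546875)            # ~0.11471578
s_22 = term_score(2.0, 3622, 0.0546875)           # ~0.04821308

# coord(1/2) from the outer "product of" halves the summed term scores.
print((s_ii + s_22) * 0.5)                        # ~0.08146443, the displayed 0.08
```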
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    0.05382742 = product of:
      0.10765484 = sum of:
        0.10765484 = product of:
          0.32296452 = sum of:
            0.32296452 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.32296452 = score(doc=230,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
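Result 2 shows how nested coordination factors stack: only one of three inner clauses and one of two outer clauses matched, so the single term weight is scaled by 1/3 and then by 1/2. A quick check of that arithmetic, with the term weight copied from the explain tree above:

```python
# Verifying the nested coord factors in result 2 (doc 230); the matching
# term weight for "3a" is copied from the explain tree above.
weight_3a = 0.32296452
score = weight_3a * (1.0 / 3.0) * (1.0 / 2.0)   # coord(1/3), then coord(1/2)
print(round(score, 8))                          # ~0.05382742, the displayed 0.05
```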
  3. Francu, V.: Does convenience trump accuracy? : the avatars of the UDC in Romania (2007) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 544) [ClassicSimilarity], result of:
              0.114715785 = score(doc=544,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=544)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper will concentrate on some major issues regarding the potential of the UDC and the current controversy about its use in Romania: i) the importance of hierarchical structures in controlled vocabularies, with a direct impact on improved information retrieval through the browsing function, which enables visualizing the hierarchies in subject areas rather than just locating a particular topic; ii) the lack of popularity of the UDC as an indexing and information retrieval language among its users, be they librarians or end users of library OPACs; and iii) the situation of UDC teachers and teaching in Romanian universities.
  4. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.03
    0.028384795 = product of:
      0.05676959 = sum of:
        0.05676959 = product of:
          0.11353918 = sum of:
            0.11353918 = weight(_text_:ii in 1177) [ClassicSimilarity], result of:
              0.11353918 = score(doc=1177,freq=6.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.4134755 = fieldWeight in 1177, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1177)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II of the article), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of efforts to improve interoperability can be observed at different levels:
    1. Schema level - Efforts are focused on the elements of the schemas, being independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries.
    2. Record level - Efforts are intended to integrate the metadata records through the mapping of the elements according to the semantic meanings of these elements. Common results include converted records and new records resulting from combining values of existing records.
    3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching.
    In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models to be discussed in this article are not always mutually exclusive. Sometimes, within a particular project, more than one method may be used.
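A toy sketch of the record-level conversion described above, in which element values are mapped into a target schema according to an element crosswalk. The crosswalk entries, element names, and sample record below are invented for illustration and are not taken from the article:

```python
# Hypothetical record-level mapping via an element crosswalk; element names
# and the sample record are illustrative only.
CROSSWALK = {                      # source element -> target element
    "245$a": "dc:title",
    "100$a": "dc:creator",
    "260$c": "dc:date",
    "650$a": "dc:subject",
}

def convert_record(source_record: dict) -> dict:
    """Map a source record onto the target schema, collecting repeated values."""
    target: dict = {}
    for element, value in source_record.items():
        mapped = CROSSWALK.get(element)
        if mapped:                               # drop elements with no equivalent
            target.setdefault(mapped, []).append(value)
    return target

marc_like = {"245$a": "Metadata interoperability and standardization",
             "100$a": "Chan, L.M.", "260$c": "2006", "650$a": "Metadata"}
print(convert_record(marc_like))
```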
  5. Aerts, D.; Broekaert, J.; Sozzo, S.; Veloz, T.: Meaning-focused and quantum-inspired information retrieval (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 735) [ClassicSimilarity], result of:
              0.098327816 = score(doc=735,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In recent years, quantum-based methods have promisingly integrated the traditional procedures in information retrieval (IR) and natural language processing (NLP). Inspired by our research on the identification and application of quantum structures in cognition, more specifically our work on the representation of concepts and their combinations, we put forward a 'quantum meaning based' framework for structured query retrieval in text corpora and standardized testing corpora. This scheme for IR rests on considering as basic notions, (i) 'entities of meaning', e.g., concepts and their combinations and (ii) traces of such entities of meaning, which is how documents are considered in this approach. The meaning content of these 'entities of meaning' is reconstructed by solving an 'inverse problem' in the quantum formalism, consisting of reconstructing the full states of the entities of meaning from their collapsed states identified as traces in relevant documents. The advantages with respect to traditional approaches, such as Latent Semantic Analysis (LSA), are discussed by means of concrete examples.
  6. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 1142) [ClassicSimilarity], result of:
              0.098327816 = score(doc=1142,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 1142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1142)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprising the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both as a way to introduce the methodology and also to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and, finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
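A minimal, illustrative sketch of the workflow the article introduces: vectorize a small corpus, fit an LDA topic model, and inspect the top terms per topic. The toy documents and parameters are placeholders, and scikit-learn is used here simply as one common implementation, not the tooling used in the article:

```python
# Toy topic-modeling example (LDA over a tiny invented corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "topic models summarize themes in large text corpora",
    "content analysis traces frames in political speeches",
    "social scientists apply text mining to cultural data",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                      # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")        # top terms per topic
```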
  7. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 38) [ClassicSimilarity], result of:
              0.098327816 = score(doc=38,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 38, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See DOI: 10.1007/978-3-030-62466-8_28.
  8. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024351284 = product of:
      0.048702568 = sum of:
        0.048702568 = product of:
          0.097405136 = sum of:
            0.097405136 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.097405136 = score(doc=3925,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  9. Dunning, A.: Do we still need search engines? (1999) 0.02
    0.024106542 = product of:
      0.048213083 = sum of:
        0.048213083 = product of:
          0.09642617 = sum of:
            0.09642617 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.09642617 = score(doc=6021,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Ariadne. 1999, no.22
  10. DiLauro, T.; Choudhury, G.S.; Patton, M.; Warner, J.W.; Brown, E.W.: Automated name authority control and enhanced searching in the Levy collection (2001) 0.02
    0.023176087 = product of:
      0.046352174 = sum of:
        0.046352174 = product of:
          0.09270435 = sum of:
            0.09270435 = weight(_text_:ii in 1160) [ClassicSimilarity], result of:
              0.09270435 = score(doc=1160,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.33760133 = fieldWeight in 1160, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1160)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper is the second in a series in D-Lib Magazine and describes a workflow management system being developed by the Digital Knowledge Center (DKC) at the Milton S. Eisenhower Library (MSEL) of The Johns Hopkins University. Based on experience from digitizing the Lester S. Levy Collection of Sheet Music, it was apparent that large-scale digitization efforts require a significant amount of human labor that is both time-consuming and costly. Consequently, this workflow management system aims to reduce the amount of human labor and time for large-scale digitization projects. The mission of this second phase of the project ("Levy II") can be summarized as follows:
    * Reduce costs for large collection ingestion by creating a suite of open-source processes, tools, and interfaces for workflow management
    * Increase access capabilities by providing a suite of research tools
    * Demonstrate utility of tools and processes with a subset of the online Levy Collection
    The cornerstones of the workflow management system include optical music recognition (OMR) software and an automated name authority control system (ANAC). The OMR software generates a logical representation of the score for sound generation, music searching, and musicological research. The ANAC disambiguates names, associating each name with an individual (e.g., the composer Septimus Winner also published under the pseudonyms Alice Hawthorne and Apsley Street, among others). Complementing the workflow tools, a suite of research tools focuses upon enhanced searching capabilities through the development and application of a fast, disk-based search engine for lyrics and music and the incorporation of an XML structure for metadata. The first paper (Choudhury et al. 2001) described the OMR software and musical components of Levy II. This paper focuses on the metadata and intellectual access components that include automated name authority control and the aforementioned search engine.
  11. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.02
    0.023176087 = product of:
      0.046352174 = sum of:
        0.046352174 = product of:
          0.09270435 = sum of:
            0.09270435 = weight(_text_:ii in 1004) [ClassicSimilarity], result of:
              0.09270435 = score(doc=1004,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.33760133 = fieldWeight in 1004, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
  12. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    0.02066275 = product of:
      0.0413255 = sum of:
        0.0413255 = product of:
          0.082651 = sum of:
            0.082651 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.082651 = score(doc=3895,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 8.2005 19:20:22
  13. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 1176) [ClassicSimilarity], result of:
              0.08193985 = score(doc=1176,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
  14. Hammond, T.; Hannay, T.; Lund, B.; Flack, M.: Social bookmarking tools (II) : a case study - Connotea (2005) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 1189) [ClassicSimilarity], result of:
              0.08193985 = score(doc=1189,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 1189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1189)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Carbonaro, A.; Santandrea, L.: A general Semantic Web approach for data analysis on graduates statistics 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 5309) [ClassicSimilarity], result of:
              0.08193985 = score(doc=5309,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 5309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5309)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Currently, several datasets released in a Linked Open Data format are available at a national and international level, but the lack of shared strategies concerning the definition of concepts related to the statistical publishing community makes it difficult to compare given facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduates' surveys domain. The developed system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys covering more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and their subsequent publication in the Linked Open Data context; iii) provides for their reuse in the open data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the graduates' surveys domain by proposing the creation of a SPARQL endpoint and a Web-based interface for the querying and visualization of the structured data.
  16. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 5853) [ClassicSimilarity], result of:
              0.08193985 = score(doc=5853,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 5853, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5853)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
  17. Guidi, F.; Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge (2015) 0.02
    0.017218959 = product of:
      0.034437917 = sum of:
        0.034437917 = product of:
          0.068875834 = sum of:
            0.068875834 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.068875834 = score(doc=5865,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 12:51:57
  18. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    0.0170459 = product of:
      0.0340918 = sum of:
        0.0340918 = product of:
          0.0681836 = sum of:
            0.0681836 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.0681836 = score(doc=3450,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Vgl.: DocEng2011, September 19-22, 2011, Mountain View, California, USA Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  19. Wei, W.; Ram, S.: Utilizing social bookmarking tag space for Web content discovery : a social network analysis approach (2010) 0.02
    0.01638797 = product of:
      0.03277594 = sum of:
        0.03277594 = product of:
          0.06555188 = sum of:
            0.06555188 = weight(_text_:ii in 1) [ClassicSimilarity], result of:
              0.06555188 = score(doc=1,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.2387202 = fieldWeight in 1, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Social bookmarking has gained popularity since the advent of Web 2.0. Keywords known as tags are created to annotate web content, and the resulting tag space composed of the tags, the resources, and the users arises as a new platform for web content discovery. Useful and interesting web resources can be located through searching and browsing based on tags, as well as following the user-user connections formed in the social bookmarking community. However, the effectiveness of tag-based search is limited due to the lack of explicitly represented semantics in the tag space. In addition, social connections between users are underused for web content discovery because of the inadequate social functions. In this research, we propose a comprehensive framework to reorganize the flat tag space into a hierarchical faceted model. We also studied the structure and properties of various networks emerging from the tag space for the purpose of more efficient web content discovery. The major research approach used in this research is social network analysis (SNA), together with methodologies employed in design science research. The contribution of our research includes: (i) a faceted model to categorize social bookmarking tags; (ii) a relationship ontology to represent the semantics of relationships between tags; (iii) heuristics to reorganize the flat tag space into a hierarchical faceted model using analysis of tag-tag co-occurrence networks; (iv) an implemented prototype system as proof-of-concept to validate the feasibility of the reorganization approach; (v) a set of evaluations of the social functions of the current networking features of social bookmarking and a series of recommendations as to how to improve the social functions to facilitate web content discovery.
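A toy sketch of one step in the approach described above: building a weighted tag-tag co-occurrence network from bookmarks and ranking tags by a standard SNA measure. The bookmark data is invented, and networkx stands in for whatever tooling the authors actually used:

```python
# Toy tag-tag co-occurrence network; bookmark data is invented for illustration.
from itertools import combinations
import networkx as nx

bookmarks = [                       # each bookmark = the set of tags assigned to it
    {"web2.0", "tagging", "folksonomy"},
    {"tagging", "folksonomy", "ontology"},
    {"ontology", "semantics"},
]

G = nx.Graph()
for tags in bookmarks:
    for a, b in combinations(sorted(tags), 2):
        w = G.get_edge_data(a, b, default={}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)          # co-occurrence count as edge weight

# Degree centrality as a rough indicator of which tags organize the space
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
```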
  20. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    0.014610772 = product of:
      0.029221544 = sum of:
        0.029221544 = product of:
          0.058443088 = sum of:
            0.058443088 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.058443088 = score(doc=1967,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
