Search (891 results, page 3 of 45)

  • year_i:[2020 TO 2030}
  1. Al-Khatib, K.; Ghosa, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.02
    0.016343227 = product of:
      0.07354452 = sum of:
        0.05872617 = weight(_text_:applications in 568) [ClassicSimilarity], result of:
          0.05872617 = score(doc=568,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=568)
        0.014818345 = weight(_text_:of in 568) [ClassicSimilarity], result of:
          0.014818345 = score(doc=568,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 568, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=568)
      0.22222222 = coord(2/9)
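The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a rough sketch, the per-term scores it reports can be reproduced from the displayed inputs (function and variable names below are ours, not Lucene's):

```python
import math

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Sketch of ClassicSimilarity term scoring, mirroring the explain
    tree above: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(termFreq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    return query_weight * field_weight

query_norm = 0.03917671
s_app = term_score(2.0, 1471, 44218, 0.0546875, query_norm)  # _text_:applications
s_of = term_score(8.0, 25162, 44218, 0.0546875, query_norm)  # _text_:of
score = (s_app + s_of) * (2.0 / 9.0)                         # coord(2/9)
print(score)  # ~0.016343 for document 568
```

The coord(2/9) factor scales the sum because only 2 of the 9 query terms matched this document.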
    
    Abstract
    Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, and could therefore benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain, and suggest some possible solutions and future directions.
    Source
    Proceedings of the Second Workshop on Scholarly Document Processing
  2. Wiederhold, R.A.; Reeve, G.F.: Authority control today : principles, practices, and trends (2021) 0.02
    Abstract
    Authority control enhances the accessibility of library resources by controlling the choice and form of access points, improving users' ability to efficiently find the works most relevant to their information search. While authority control and the technologies that support its implementation continue to evolve, the underlying principles and purposes remain the same. Written primarily for a new generation of librarians, this paper illuminates the importance of authority control in cataloging and library database management, discusses its history, describes current practices, and introduces readers to trends and issues in the field, including future applications beyond the library catalog.
  3. Das, S.; Bagchi, M.; Hussey, P.: How to teach domain ontology-based knowledge graph construction? : an Irish experiment (2023) 0.02
    Abstract
    Domains represent concepts that belong to specific parts of the world. The particularized meanings of the words that linguistically encode such domain concepts are provided by domain-specific resources. The explicit meaning of such words is increasingly captured computationally in domain-specific ontologies, which, even for the same reference domain, are more often than not semantically incompatible. As information systems that rely on domain ontologies expand, there is a growing need not only to design domain ontologies and domain-ontology-grounded Knowledge Graphs (KGs) but also to align them with general standards and conventions for interoperability. This often presents an insurmountable challenge to domain experts, who must additionally learn the construction of domain ontologies and KGs. Until now, several research methodologies have been proposed by different research groups, using different technical approaches and based on scenarios from different domains of application. However, no methodology has been proposed that both facilitates designing conceptually well-founded ontologies and is equally grounded in the general pedagogical principles of knowledge organization, and is thereby flexible enough to teach and to reproduce with domain experts. The purpose of this paper is to provide such a general, pedagogically flexible semantic knowledge modelling methodology. We exemplify the methodology with examples and illustrations from a professional-level digital healthcare course, and conclude with an evaluation grounded in technological parameters as well as user experience design principles.
    Date
    20.11.2023 17:19:22
  4. Zhang, Y.; Wu, M.; Zhang, G.; Lu, J.: Stepping beyond your comfort zone : diffusion-based network analytics for knowledge trajectory recommendation (2023) 0.02
    Abstract
    Predicting a researcher's knowledge trajectories beyond their current foci can leverage potential inter-, cross-, and multi-disciplinary interactions to achieve exploratory innovation. In this study, we present a method of diffusion-based network analytics for knowledge trajectory recommendation. The method begins by constructing a heterogeneous bibliometric network consisting of a co-topic layer and a co-authorship layer. A novel link prediction approach with a diffusion strategy is then used to capture the interactions between social elements (e.g., collaboration) and knowledge elements (e.g., technological similarity) in the process of exploratory innovation. This diffusion strategy differentiates the interactions occurring among homogeneous and heterogeneous nodes in the heterogeneous bibliometric network and weights the strengths of these interactions. Two sets of experiments, one with a local dataset and the other with a global dataset, demonstrate that the proposed method is superior to 10 selected baselines in link prediction, recommender systems, and upstream graph representation learning. A case study recommending the knowledge trajectories of information scientists, with topical hierarchy and explainable mediators, reveals the proposed method's reliability and its potential practical uses in broad scenarios.
    Date
    22. 6.2023 18:07:12
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.775-790
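The entry above layers a co-topic network over a co-authorship network and scores candidate links by diffusion. A generic toy sketch of that idea (this is an illustrative stand-in, not the authors' actual algorithm; the networks, weights, and the parameter `alpha` are invented):

```python
import numpy as np

# Toy two-layer bibliometric network over 4 researchers.
co_topic = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 0]], dtype=float)   # knowledge layer
co_author = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 0]], dtype=float)  # social layer

alpha = 0.6  # relative weight of knowledge vs. social interactions
A = alpha * co_topic + (1 - alpha) * co_author

# One diffusion step: each node spreads its combined tie weight to its
# neighbours' neighbours; a high score for an unlinked pair suggests a
# candidate link (i.e., a knowledge trajectory to recommend).
row_sums = A.sum(axis=1, keepdims=True)
P = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
scores = A @ P

# Researcher 0 is closer (via diffusion) to 2 than to 3:
print(scores[0, 2], scores[0, 3])
```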
  5. Palsdottir, A.: Data literacy and management of research data : a prerequisite for the sharing of research data (2021) 0.02
    Abstract
    Purpose
    The purpose of this paper is to investigate knowledge of and attitudes towards research data management, the use of data management methods and the perceived need for support, in relation to participants' field of research.
    Design/methodology/approach
    This is a quantitative study. Data were collected by an email survey sent to 792 academic researchers and doctoral students. The total response rate was 18% (N = 139). The measurement instrument consisted of six sets of questions: about data management plans, the assignment of additional information to research data, metadata, standard file naming systems, training in data management methods and the storing of research data.
    Findings
    The main finding is that knowledge about the procedures of data management is limited, and that data management is not a normal practice in the researchers' work. They were, however, generally of the opinion that the university should take the lead by recommending and offering access to the necessary data management tools. Taken together, the results indicate an urgent need to increase researchers' understanding of the importance of data management based on professional knowledge, and to provide them with resources and training that enable them to make effective and productive use of data management methods.
    Research limitations/implications
    The survey was sent to all members of the population rather than to a sample of it. Because of the response rate, the results cannot be generalized to all researchers at the university. Nevertheless, the findings may provide an important understanding of their research data procedures, in particular what characterizes their knowledge about data management and their attitude towards it.
    Practical implications
    Awareness of these issues is essential for information specialists at academic libraries, together with other units within the universities, to be able to design infrastructures and develop services that suit the needs of the research community. The findings can be used to develop data policies and services based on professional knowledge of best practices and recognized standards that assist the research community in data management.
    Originality/value
    The study contributes to the existing literature on research data management by examining the results by participants' field of research. Recognition of these issues is critical for information specialists, in collaboration with universities, to design relevant infrastructures and services for academics and doctoral students that can promote their research data management.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 73(2021) no.2, S.322-341
  6. Oh, D.-G.: Comparative analysis of national classification systems : cases of Korean Decimal Classification (KDC) and Nippon Decimal Classification (NDC) (2023) 0.02
    Abstract
    The Korean Decimal Classification (KDC) and the Nippon Decimal Classification (NDC) are the national classification systems of Korea and Japan. They have been widely used in many libraries in each country and successfully maintained by the respective national library associations, the Korean Library Association (KLA) and the Japan Library Association (JLA). This study compares the general characteristics of these two national classification systems using their latest editions, KDC 6 and NDC 10. After a review of previous research, their origins, general history and development, and usage are briefly compared. Various aspects, including classification by discipline rather than by subject, decimal expansion of the classes using pure Arabic notation, hierarchical structure, and mnemonic quality, are checked for both systems. Results of the comparative analyses of the major auxiliary tables, main classes and 100 divisions of the schedules of the two systems are presented one by one, with special regard to the Dewey Decimal Classification (DDC). The analyses focus on the differences between the two systems as well as the characteristics that reflect the local situations of both countries. The study suggests some ideas for future developments and research based on the systems' strengths and weaknesses.
  7. Krishnamurthy, M.; Satija, M.P.; Martínez-Ávila, D.: Classification of classifications : species of library classifications (2024) 0.02
    Abstract
    Acknowledging the importance of classification not only for library and information science but also for the study and mapping of world phenomena, in this paper we revisit and systematize the main types of classifications and focus on the species of classification, drawing mainly on the work of S. R. Ranganathan. We trace the evolution of library classification systems by their structures and modes of design across various shades of classification systems, and make a comparative study of the enumerative and faceted species of library classifications. The value of this paper is to provide a picture of the whole spectrum of existing classifications, which may serve the study of future developments and the construction of new systems. This paper updates previous works by Comaromi and Ranganathan and is also theoretically inspired by them.
  8. Buente, W.; Baybayan, C.K.; Hajibayova, L.; McCorkhill, M.; Panchyshyn, R.: Exploring the renaissance of wayfinding and voyaging through the lens of knowledge representation, organization and discovery systems (2020) 0.02
    Abstract
    The purpose of this paper is to provide a critical analysis, from an ethical perspective, of how the concept of indigenous wayfinding and voyaging is mapped in knowledge representation, organization and discovery systems.
    Design/methodology/approach
    In this study, the Dewey Decimal Classification, the Library of Congress Subject Headings, the Library of Congress Classification systems and the Web of Science citation database were methodically examined to determine how these systems represent and facilitate the discovery of indigenous knowledge of wayfinding and voyaging.
    Findings
    The analysis revealed that there was no dedicated representation of the indigenous practices of wayfinding and voyaging in the major knowledge representation, organization and discovery systems. By scattering indigenous practice across various, often very broad and unrelated classes, coherence in the record is disrupted, resulting in misrepresentation of these indigenous concepts.
    Originality/value
    This study contributes to a relatively limited research literature on the representation and organization of indigenous knowledge of wayfinding and voyaging. It calls for fostering a better understanding and appreciation of the rich knowledge that indigenous cultures provide for an enlightened society.
    Object
    Web of Science
    Source
    Journal of documentation. 76(2020) no.6, S.1279-1293
  9. Zaitseva, E.M.: Developing linguistic tools of thematic search in library information systems (2023) 0.01
    Abstract
    Within the R&D program "Information support of research by scientists and specialists on the basis of RNPLS&T Open Archive - the system of scientific knowledge aggregation", the RNPLS&T analyzes the use of linguistic tools for thematic search in modern library information systems and the prospects for their development. The author defines the key common characteristics of the e-catalogs of the largest Russian libraries revealed at the first stage of the analysis. Based on these common characteristics and a detailed comparative analysis, the author outlines and substantiates vectors for enhancing the search interfaces of e-catalogs. The focus is on linguistic tools for thematic search in library information systems; the key vectors suggested are: use of thematic search at different search levels with clear-cut level differentiation; use of combined functionality within the thematic search system; implementation of classification search in all e-catalogs; hierarchical representation of classifications; and use of matching systems for classification information retrieval languages and, in the long term, between classification and verbal information retrieval languages, and among various verbal information retrieval languages. The author formulates practical recommendations to improve thematic search in library information systems.
  10. Kahlawi, A.: An ontology driven ESCO LOD quality enhancement (2020) 0.01
    Abstract
    The labor market is a complex system that is difficult to manage. To address this challenge, the European Union has launched the ESCO project, a language that aims to describe this labor market. To support the uptake of this project, its dataset was published as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met. First, the LOD must be feasible and of high quality. In addition, it must provide the user with the right answers, and it has to be built according to a clear and correct structure. This study investigates the ESCO LOD, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries, which yields solutions to improve its quality via a set of rules built in first-order logic. This process was conducted based on a newly proposed ESCO ontology.
    Source
    International journal of advanced computer science and applications 11(2020) no.3
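The ESCO entry above evaluates LOD quality with SPARQL queries and first-order-logic rules. As a minimal stdlib-only sketch of that idea (the triples and property names below are invented, not taken from the ESCO dataset), one such rule, "every subject must carry a skos:prefLabel", can be checked over a list of triples much like a SPARQL FILTER NOT EXISTS query would:

```python
# Hypothetical mini-dataset standing in for an ESCO LOD fragment:
# (subject, predicate, object) triples; one subject lacks a prefLabel,
# which is the kind of quality defect such a check would flag.
PREF_LABEL = "skos:prefLabel"
triples = [
    ("esco:skill/1", PREF_LABEL, "data analysis"),
    ("esco:skill/2", "skos:broader", "esco:skill/1"),  # no prefLabel -> defect
]

def subjects_missing_label(triples):
    """Rule in first-order-logic style: forall s, subject(s) -> hasPrefLabel(s).
    Returns the subjects that violate it."""
    subjects = {s for s, _, _ in triples}
    labelled = {s for s, p, _ in triples if p == PREF_LABEL}
    return sorted(subjects - labelled)

print(subjects_missing_label(triples))  # ['esco:skill/2']
```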
  11. Machado, L.M.O.: Ontologies in knowledge organization (2021) 0.01
    Abstract
    Within the set of knowledge organization systems (KOS), the term "ontology" is paradigmatic of the terminological ambiguity across different typologies. Contributing to this situation is the indiscriminate association of the term "ontology" both with a specific type of KOS and with a process of categorization, due to the interdisciplinary use of the term with different meanings. We present a systematization of different authors' perspectives on ontologies as representational artifacts, seeking to contribute to terminological clarification. Focusing the analysis on the intention, semantics and modulation of ontologies, it was possible to discern two broad perspectives on ontologies as artifacts that coexist in the knowledge organization systems spectrum: ontologies viewed, on the one hand, as an evolution of traditional conceptual systems in terms of complexity and, on the other hand, as systems that organize ontological rather than epistemological knowledge. The focus of ontological analysis is the item to be modelled, not the intentions that motivate the construction of the system.
  12. Kleiner, J.; Ludwig, T.: If consciousness is dynamically relevant, artificial intelligence isn't conscious. (2023) 0.01
    Abstract
    We demonstrate that if consciousness is relevant for the temporal evolution of a system's states--that is, if it is dynamically relevant--then AI systems cannot be conscious. That is because AI systems run on CPUs, GPUs, TPUs or other processors which have been designed and verified to adhere to computational dynamics that systematically preclude or suppress deviations. The design and verification preclude or suppress, in particular, potential consciousness-related dynamical effects, so that if consciousness is dynamically relevant, AI systems cannot be conscious.
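The relevance values shown for each hit are Lucene "explain" trees from its classic TF-IDF similarity: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), the contributions are summed, and the sum is scaled by the coordination factor coord(matching clauses / total clauses). As a sketch, the figures reported for entry 12 can be reproduced in a few lines of Python:

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity: queryWeight * fieldWeight for one term."""
    tf = math.sqrt(freq)                   # tf(freq)
    query_weight = idf * query_norm        # idf(docFreq, maxDocs) * queryNorm
    field_weight = tf * idf * field_norm   # tf * idf * fieldNorm(doc)
    return query_weight * field_weight

QUERY_NORM = 0.03917671

# Figures taken verbatim from the explain tree of entry 12 (doc 1213).
w_of = term_weight(2.0, 1.5637573, QUERY_NORM, 0.0625)       # _text_:of
w_systems = term_weight(6.0, 3.0731742, QUERY_NORM, 0.0625)  # _text_:systems
score = (w_of + w_systems) * (2 / 9)                         # coord(2/9)
```

The reconstructed values agree with the explain output shown for the entry to within floating-point rounding.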
  13. Xia, H.: What scholars and IRBs talk when they talk about the Belmont principles in crowd work-based research (2023) 0.01
    0.014341635 = product of:
      0.064537354 = sum of:
        0.050336715 = weight(_text_:applications in 843) [ClassicSimilarity], result of:
          0.050336715 = score(doc=843,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 843, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=843)
        0.014200641 = weight(_text_:of in 843) [ClassicSimilarity], result of:
          0.014200641 = score(doc=843,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 843, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=843)
      0.22222222 = coord(2/9)
    
    Abstract
    How scholars and IRBs perceive and apply the Belmont principles in crowd work-based research was an open and largely neglected question. As crowd work becomes increasingly popular for scholars to implement research and collect data, such negligence, signaling a lack of attention to the ethical issues in crowd work-based research more broadly, seemed alarming. To fill this gap, we conducted a qualitative study with 32 scholars and IRB directors/analysts in the United States to inquire into their perceptions and applications of the Belmont principles in crowd work-based research. We found two dilemmas in applying the Belmont principles in crowd work-based research: the dilemma between the dehumanization and expected autonomy of crowd workers, and the dilemma between monetary incentives/reputational risks and the conventional notion of research benefits/risks. We also compared the scholars' and IRBs' ethical perspectives and proposed research implications for future work.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.1, S.67-80
  14. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI ML based subject indexing system for libraries (2023) 0.01
    0.014232597 = product of:
      0.06404669 = sum of:
        0.015876798 = weight(_text_:of in 977) [ClassicSimilarity], result of:
          0.015876798 = score(doc=977,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25915858 = fieldWeight in 977, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=977)
        0.048169892 = weight(_text_:software in 977) [ClassicSimilarity], result of:
          0.048169892 = score(doc=977,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30993375 = fieldWeight in 977, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=977)
      0.22222222 = coord(2/9)
    
    Abstract
    The research study as reported here is an attempt to explore the possibilities of an AI/ML-based semi-automated indexing system in a library setup to handle large volumes of documents. It uses the Python virtual environment to install and configure an open source AI environment (named Annif) to feed the LOD (Linked Open Data) dataset of Library of Congress Subject Headings (LCSH) as a standard KOS (Knowledge Organisation System). The framework deployed the Turtle format of LCSH after cleaning the file with Skosify, applied an array of backend algorithms (namely TF-IDF, Omikuji, and NN-Ensemble) to measure relative performance, and selected Snowball as an analyser. The training of Annif was conducted with a large set of bibliographic records populated with subject descriptors (MARC tag 650$a) and indexed by trained LIS professionals. The training dataset is first treated with MarcEdit to export it in a format suitable for OpenRefine, and then in OpenRefine it undergoes many steps to produce a bibliographic record set suitable to train Annif. The framework, after training, has been tested with a bibliographic dataset to measure indexing efficiencies, and finally, the automated indexing framework is integrated with data wrangling software (OpenRefine) to produce suggested headings on a mass scale. The entire framework is based on open-source software, open datasets, and open standards.
    Source
    DESIDOC journal of library and information technology. 43(2023) no.1, S.45-54
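The Annif setup described in the abstract above can be expressed as a project definition. The fragment below is a hedged illustration (the section name and suggestion limit are invented for the example) of a TF-IDF backend with a Snowball analyser over an LCSH vocabulary, following Annif's documented `projects.cfg` format:

```ini
# Hypothetical Annif project definition -- section name and limit are illustrative.
[lcsh-tfidf-en]
name=LCSH TF-IDF (English)
language=en
backend=tfidf
analyzer=snowball(en)
vocab=lcsh
limit=10
```

Training and suggestion would then follow Annif's command-line workflow (for example, `annif train` on the exported corpus and `annif suggest` for new documents).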
  15. Cox, A.: How artificial intelligence might change academic library work : applying the competencies literature and the theory of the professions (2023) 0.01
    0.01417063 = product of:
      0.063767835 = sum of:
        0.041947264 = weight(_text_:applications in 904) [ClassicSimilarity], result of:
          0.041947264 = score(doc=904,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 904, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=904)
        0.021820573 = weight(_text_:of in 904) [ClassicSimilarity], result of:
          0.021820573 = score(doc=904,freq=34.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.35617945 = fieldWeight in 904, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=904)
      0.22222222 = coord(2/9)
    
    Abstract
    The probable impact of artificial intelligence (AI) on work, including professional work, is contested, but AI is unlikely to leave such work untouched. The purpose of this conceptual paper is to consider the likelihood of the adoption of different approaches to AI in academic libraries. As theoretical lenses to guide the analysis, the paper draws on both the library and information science (LIS) literature on librarians' competencies and the notions of jurisdiction and hybrid logics from the sociological theory of the professions. The paper starts by outlining these theories and then reviews the nature of AI and the range of its potential uses in academic libraries. The main focus of the paper is on the application of AI to knowledge discovery. Eleven different potential approaches libraries might adopt to such AI applications are analyzed and their likelihood evaluated. The paper then considers how a range of internal and external factors might influence the adoption of AI. In addition to reflecting on the possible impact of AI on librarianship, the paper contributes to understanding how to synthesize the competencies literature with the theory of the professions and presents a new understanding of librarians as hybrid professionals.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.3, S.367-380
  16. Biagetti, M.T.: Ontologies as knowledge organization systems (2021) 0.01
    0.013932516 = product of:
      0.06269632 = sum of:
        0.022227516 = weight(_text_:of in 439) [ClassicSimilarity], result of:
          0.022227516 = score(doc=439,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36282203 = fieldWeight in 439, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=439)
        0.04046881 = weight(_text_:systems in 439) [ClassicSimilarity], result of:
          0.04046881 = score(doc=439,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.33612844 = fieldWeight in 439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=439)
      0.22222222 = coord(2/9)
    
    Abstract
    This contribution presents the principal features of ontologies, drawing special attention to the comparison between ontologies and the different kinds of knowledge organization systems (KOS). The focus is on the semantic richness exhibited by ontologies, which allows the creation of a great number of relationships between terms. That establishes ontologies as the most evolved type of KOS. The concepts of "conceptualization" and "formalization" and the key components of ontologies are described and discussed, along with upper and domain ontologies and special typologies, such as bibliographical ontologies and biomedical ontologies. The use of ontologies in the digital libraries environment, where they have replaced thesauri for query expansion in searching, and the role they are playing in the Semantic Web, especially for semantic interoperability, are sketched.
    Series
    Reviews of Concepts in Knowledge Organization
  17. Hahn, J.: Semi-automated methods for BIBFRAME work entity description (2021) 0.01
    0.0138898 = product of:
      0.0625041 = sum of:
        0.014818345 = weight(_text_:of in 725) [ClassicSimilarity], result of:
          0.014818345 = score(doc=725,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 725, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=725)
        0.047685754 = weight(_text_:software in 725) [ClassicSimilarity], result of:
          0.047685754 = score(doc=725,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=725)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper reports an investigation of machine learning methods for the semi-automated creation of a BIBFRAME Work entity description within the RDF linked data editor Sinopia (https://sinopia.io). The automated subject indexing software Annif was configured with the Library of Congress Subject Headings (LCSH) vocabulary from the Linked Data Service at https://id.loc.gov/. The training corpus comprised 9.3 million titles and LCSH linked data references from the IvyPlus POD project (https://pod.stanford.edu/) and from Share-VDE (https://wiki.share-vde.org). Semi-automated processes were explored to support and extend, not replace, professional expertise.
  18. Geras, A.; Siudem, G.; Gagolewski, M.: Time to vote : temporal clustering of user activity on Stack Overflow (2022) 0.01
    0.013763658 = product of:
      0.06193646 = sum of:
        0.021062955 = weight(_text_:of in 765) [ClassicSimilarity], result of:
          0.021062955 = score(doc=765,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34381276 = fieldWeight in 765, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=765)
        0.040873505 = weight(_text_:software in 765) [ClassicSimilarity], result of:
          0.040873505 = score(doc=765,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=765)
      0.22222222 = coord(2/9)
    
    Abstract
    Question-and-answer (Q&A) sites improve access to information and ease the transfer of knowledge. In recent years, they have grown in popularity and importance, enabling research on the behavioral patterns of their users. We study the dynamics of the casting of 7 M votes across a sample of 700 k posts on Stack Overflow, a large community of professional software developers. We employ log-Gaussian mixture modeling and Markov chains to formulate a simple yet elegant description of the considered phenomena. We show that the interevent times can naturally be clustered into 3 typical time scales: those occurring within hours, weeks, and months, and show how the events become rarer and rarer as time passes. It turns out that a post's popularity in a short period after publication is a weak predictor of its overall success, contrary to what was observed, for example, for YouTube clips. Nonetheless, the sleeping beauties sometimes awake and can receive bursts of votes following each other relatively quickly.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.12, S.1681-1691
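The three interevent time scales reported above (hours, weeks, months) can be illustrated with a toy computation: cluster log-transformed interevent times and read off the typical scales. The sketch below is not the authors' log-Gaussian mixture/Markov-chain pipeline; it uses synthetic data and a minimal 1-D k-means, and every name and parameter in it is illustrative:

```python
import math
import random

def kmeans_1d(xs, k=3, iters=50):
    """Minimal 1-D k-means with deterministic, quantile-based initialisation."""
    xs_sorted = sorted(xs)
    n = len(xs_sorted)
    centres = [xs_sorted[int((i + 0.5) * n / k)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda i: abs(x - centres[i]))
            groups[nearest].append(x)
        # Move each centre to the mean of its group (keep it if the group is empty).
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    return sorted(centres)

# Synthetic interevent times (in seconds), log-normal around three scales:
# roughly one hour, one week, and one month.
rng = random.Random(1)
log_times = (
    [rng.gauss(math.log(3_600), 0.4) for _ in range(500)]
    + [rng.gauss(math.log(604_800), 0.4) for _ in range(300)]
    + [rng.gauss(math.log(2_600_000), 0.4) for _ in range(200)]
)
scales_sec = [math.exp(c) for c in kmeans_1d(log_times, k=3)]
```

On this synthetic sample the three recovered centres sit near the hour, week, and month scales, mirroring the clustering the paper reports for Stack Overflow votes.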
  19. Rubel, A.; Castro, C.; Pham, A.: Algorithms and autonomy : the ethics of automated decision systems (2021) 0.01
    0.013755783 = product of:
      0.061901025 = sum of:
        0.011833867 = weight(_text_:of in 671) [ClassicSimilarity], result of:
          0.011833867 = score(doc=671,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19316542 = fieldWeight in 671, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=671)
        0.050067157 = weight(_text_:systems in 671) [ClassicSimilarity], result of:
          0.050067157 = score(doc=671,freq=12.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.41585106 = fieldWeight in 671, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=671)
      0.22222222 = coord(2/9)
    
    Abstract
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work... the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core
    LCSH
    Decision support systems / Moral and ethical aspects
    Expert systems (Computer science) / Moral and ethical aspects
    Subject
    Decision support systems / Moral and ethical aspects
    Expert systems (Computer science) / Moral and ethical aspects
  20. Costas, R.; Rijcke, S. de; Marres, N.: "Heterogeneous couplings" : operationalizing network perspectives to study science-society interactions through social media metrics (2021) 0.01
    0.013722025 = product of:
      0.06174911 = sum of:
        0.041947264 = weight(_text_:applications in 215) [ClassicSimilarity], result of:
          0.041947264 = score(doc=215,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=215)
        0.019801848 = weight(_text_:of in 215) [ClassicSimilarity], result of:
          0.019801848 = score(doc=215,freq=28.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32322758 = fieldWeight in 215, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=215)
      0.22222222 = coord(2/9)
    
    Abstract
    Social media metrics have a genuine networked nature, reflecting the networking characteristics of the social media platform from where they are derived. This networked nature has been relatively less explored in the literature on altmetrics, although new network-level approaches are starting to appear. A general conceptualization of the role of social media networks in science communication, and particularly of social media as a specific type of interface between science and society, is still missing. The aim of this paper is to provide a conceptual framework for appraising interactions between science and society in multiple directions, in what we call heterogeneous couplings. Heterogeneous couplings are conceptualized as the co-occurrence of science and non-science objects, actors, and interactions in online media environments. This conceptualization provides a common framework to study the interactions between science and non-science actors as captured via online and social media platforms. The conceptualization of heterogeneous couplings opens wider opportunities for the development of network applications and analyses of the interactions between societal and scholarly entities in social media environments, paving the way toward more advanced forms of altmetrics, social (media) studies of science, and the conceptualization and operationalization of more advanced science-society studies.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.5, S.595-610

Languages

  • e 812
  • d 72
  • pt 4
  • sp 1

Types

  • a 833
  • el 108
  • m 23
  • p 13
  • x 4
  • s 3
  • A 1
  • EL 1
  • r 1
