Search (35537 results, page 1777 of 1777)

  1. Skulimowski, A.M.J.; Köhler, T.: ¬A future-oriented approach to the selection of artificial intelligence technologies for knowledge platforms (2023) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1015) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1015,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1015, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1015)
      0.05882353 = coord(1/17)
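    The score breakdowns attached to each entry are Lucene explain output for the ClassicSimilarity (TF-IDF) ranking model. As a reading aid, the first breakdown above combines into the product below; tf (the square root of the term frequency), idf, queryNorm, fieldNorm, and coord are the values shown in the tree:
    \[
    \mathrm{score}(q,d) = \mathrm{coord}(q,d) \cdot \sum_{t \in q} \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\text{queryWeight}} \cdot \underbrace{\sqrt{\mathrm{freq}(t,d)}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}}
    \]
    \[
    \approx \tfrac{1}{17} \cdot (1.3602545 \cdot 0.024967048) \cdot (1.4142135 \cdot 1.3602545 \cdot 0.0390625) \approx 1.5 \times 10^{-4}
    \]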
    
    Abstract
    This article presents approaches used to solve the problem of selecting AI technologies and tools to obtain the creativity-fostering functionalities of an innovative knowledge platform. The aforementioned selection problem has so far lagged behind other software-specific aspects of online knowledge platform and learning platform development. We linked technological recommendations from group decision support exercises to the platform design aims and constraints using an expert Delphi survey and multicriteria analysis methods. The links between expected advantages of using selected AI building tools, AI-related system functionalities, and their ongoing relevance until 2030 were assessed and used to optimize the learning scenarios and to plan the future development of the platform. The selected technologies allowed the platform management to implement the desired functionalities, thus harnessing the potential of open innovation platforms more effectively and delivering a model for the development of a relevant class of advanced open-access knowledge provision systems. Additionally, our approach is an essential part of a digital sustainability and AI-alignment strategy for the aforementioned class of systems. The knowledge platform, which serves as a case study for our methodology, has been developed within an EU Horizon 2020 research project.
  2. Bao, X.; Ke, P.: Chaos, expansion, and contraction : the information worlds of depression patients during the COVID-19 pandemic lockdown (2023) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1020) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1020,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1020, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1020)
      0.05882353 = coord(1/17)
    
    Abstract
    Although there have been several studies of people's information behaviors during the COVID-19 epidemic, the information practices of one specific group, people with depression, have been neglected. This study reports on qualitative interviews with 24 participants to explore the information practices of people living with depression during the pandemic lockdown. We use the theory of information worlds and the concept of transition to understand the phases of chaos, expansion, and contraction that the information worlds of this group present during the lockdown, and examine the interrelationship between the information worlds and the individual's transition experiences during specific periods. Our results show that, first, emotion, body, and embodiment play key roles in the individual's information worlds, while individuals' information practices are regulated by the economic environment, group norms, and other social circumstances. Second, government-level information, the volunteer community, social media, and nonhuman objects further impact participants' information practices. We suggest that health management strategies need to have different priorities at different stages of the transition, and that attention should be paid to the provision of emotional information support systems during the pandemic lockdown.
  3. Ahmed, M.: Automatic indexing for agriculture : designing a framework by deploying Agrovoc, Agris and Annif (2023) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1024) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1024,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1024, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1024)
      0.05882353 = coord(1/17)
    
    Abstract
    There are several ways to employ machine learning for automating subject indexing. One popular strategy is to utilize a supervised learning algorithm to train a model on a set of documents that have been manually indexed by subject matter using a standard vocabulary. The resulting model can then predict the subject of new and previously unseen documents by identifying patterns learned from the training data. To do this, the first step is to gather a large dataset of documents and manually assign each document a set of subject keywords/descriptors from a controlled vocabulary (e.g., from Agrovoc). Next, the dataset (obtained from Agris) can be divided into (i) a training dataset and (ii) a test dataset. The training dataset is used to train the model, while the test dataset is used to evaluate the model's performance. Machine learning can be a powerful tool for automating the process of subject indexing. This research is an attempt to apply Annif (http://annif.org/), an open-source AI/ML framework, to autogenerate subject keywords/descriptors for documentary resources in the domain of agriculture. The training dataset is obtained from Agris, which applies the Agrovoc thesaurus as a vocabulary tool (https://www.fao.org/agris/download).
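    As a hedged illustration of the train/test workflow this abstract describes, the sketch below trains a generic multi-label subject-indexing model and evaluates it on a held-out split. It is not Annif's implementation, and the records, Agrovoc-style descriptors, and parameters are invented for the example:

    ```python
    # Toy supervised subject indexing with a train/test split, as described above.
    # NOT Annif's backend; records and descriptors are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    # Hypothetical AGRIS-style records: (abstract text, manually assigned descriptors).
    records = [
        ("Effect of nitrogen fertilizer on wheat yield", {"wheat", "fertilizers", "crop yield"}),
        ("Irrigation scheduling for maize under drought stress", {"maize", "irrigation", "drought"}),
        ("Nitrogen use efficiency in irrigated wheat systems", {"wheat", "fertilizers", "irrigation"}),
        ("Drought tolerance and yield stability in maize cultivars", {"maize", "drought", "crop yield"}),
    ]
    texts = [text for text, _ in records]
    descriptors = [labels for _, labels in records]

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(descriptors)  # one indicator column per descriptor
    X_train, X_test, y_train, y_test = train_test_split(texts, y, test_size=0.25, random_state=42)

    # TF-IDF features plus a one-vs-rest classifier stand in for the learned model.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    model.fit(X_train, y_train)

    # Evaluate suggested descriptors against the manually assigned ones in the test split.
    predicted = model.predict(X_test)
    print("micro-F1:", f1_score(y_test, predicted, average="micro", zero_division=0))
    ```

    In practice a much larger Agris-derived training corpus and a vocabulary-aware backend would be needed; the point here is only the split-train-evaluate loop.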
  4. Trace, C.B.; Zhang, Y.; Yi, S.; Williams-Brown, M.Y.: Information practices around genetic testing for ovarian cancer patients (2023) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1071) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1071,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1071)
      0.05882353 = coord(1/17)
    
    Abstract
    Knowledge of ovarian cancer patients' information practices around cancer genetic testing (GT) is needed to inform interventions that promote patient access to GT-related information. We interviewed 21 ovarian cancer patients and survivors who had GT as part of the treatment process and analyzed the transcripts using the qualitative content analysis method. We found that patients' information practices, manifested in their information-seeking mode, information sources utilized, information assessment, and information use, showed three distinct styles: passive, semi-active, and active. Patients with the passive style primarily received information from clinical sources, encountered information, or delegated information-seeking to family members; they were not inclined to assess information themselves and seldom used it to learn or influence others. Women with semi-active and active styles adopted more active information-seeking modes to approach information, utilized information sources beyond clinical settings, attempted to assess the information found, and actively used it to learn, educate others, or advocate GT to family and friends. Guided by the social ecological model, we found multiple levels of influences, including personal, interpersonal, organizational, community, and societal, acting as motivators or barriers to patients' information practice. Based on these findings, we discussed strategies to promote patient access to GT-related information.
  5. Rafferty, P.: Genre as knowledge organization (2022) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1093) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1093,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1093)
      0.05882353 = coord(1/17)
    
    Series
    Reviews of concepts in knowledge organization
  6. Oh, D.-G.: Comparative analysis of national classification systems : cases of Korean Decimal Classification (KDC) and Nippon Decimal Classification (NDC) (2023) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1121) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1121,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1121, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1121)
      0.05882353 = coord(1/17)
    
    Abstract
    The Korean Decimal Classification (KDC) and Nippon Decimal Classification (NDC) are the national classification systems of Korea and Japan. They have been widely used in many libraries in each country and maintained successfully by the respective national library associations, the Korean Library Association (KLA) and the Japan Library Association (JLA). This study compares the general characteristics of these two national classification systems using their latest editions, KDC 6 and NDC 10. After a review of previous research, their origins, general history and development, and usage are briefly compared. Various aspects are examined for both systems, including classification by discipline rather than by subject, decimal expansion of classes using pure Arabic notation, hierarchical structure, and mnemonic quality. Results of the comparative analyses of the major auxiliary tables, main classes, and 100 divisions of the two systems' schedules are presented one by one, with special regard to the Dewey Decimal Classification (DDC). The analyses focus on the differences between the two systems as well as the characteristics that reflect the local situations of both countries. The study suggests ideas for future development and research based on the strengths and weaknesses identified.
  7. Jia, R.M.; Du, J.T.; Zhao, Y.(C.): Interaction with peers online : LGBTQIA+ individuals' information seeking and meaning-making during the life transitions of identity construction (2024) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1198) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1198,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1198)
      0.05882353 = coord(1/17)
    
    Abstract
    People search for information and experiences and seek meaning as a common reaction to new life challenges. Little is known, however, about the interactions through which experiential information is acquired, and how such interactions are meaningful to an information seeker. Through a qualitative content analysis of 992 posts in an online forum, this study investigated lesbian, gay, bisexual, transgender, intersex, queer/questioning, asexual (LGBTQIA+) individuals' online information interactions and meaning-making with peers during their life transitions of identity construction. Our analysis reveals LGBTQIA+ people's life challenges across three transition stages (being aware of, exploring, and living with a new identity). Three main types of online peer interaction were identified: cognitive, affective, and situational. We found that online peer interactions are not only a type of information source that LGBTQIA+ individuals use to acquire understanding about themselves but also a unique space for transformative learning and meaning-making where they share self-examination and reflection, make assessments and assumptions, and obtain the strength and skills to initiate and adapt to life transitions. The findings offer theoretical contributions to the development of information behavior models of transitions and practical implications for providing information services that support LGBTQIA+ individuals' meaning-making during life transitions.
  8. Gu, D.; Liu, H.; Zhao, H.; Yang, X.; Li, M.; Lian, C.: ¬A deep learning and clustering-based topic consistency modeling framework for matching health information supply and demand (2024) 0.00
    1.5011833E-4 = product of:
      0.0025520115 = sum of:
        0.0025520115 = weight(_text_:in in 1209) [ClassicSimilarity], result of:
          0.0025520115 = score(doc=1209,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07514416 = fieldWeight in 1209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1209)
      0.05882353 = coord(1/17)
    
    Abstract
    Improving health literacy through health information dissemination is one of the most economical and effective mechanisms for improving population health. This process needs to fully accommodate the thematic suitability of health information supply and demand and to reduce the impact of information overload and supply-demand mismatch on people's enthusiasm for acquiring health information. We propose a health information topic modeling analysis framework that integrates deep learning methods and clustering techniques to model the supply-side and demand-side topics of health information and to quantify the thematic alignment of supply and demand. To validate the effectiveness of the framework, we conducted an empirical analysis on a dataset of 90,418 pieces of textual data from two prominent social networking platforms. The results show that the supply of health information generally does not yet meet demand, that the shortfall is especially pronounced for disease-related topics, and that there is clear inconsistency between the supply and demand sides for the same health topics. Public health policy-making departments and content producers can adjust their information selection and dissemination strategies according to the distribution of identified health topics, thereby improving the effectiveness of public health information dissemination.
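    As a minimal sketch of the supply-demand alignment idea, not the authors' framework: the paper uses deep-learning embeddings, while TF-IDF and k-means serve here only as lightweight stand-ins, and the texts and cluster count are invented:

    ```python
    # Cluster pooled texts into topics, then compare how supply-side and demand-side
    # documents distribute over those topics; smaller divergence = better alignment.
    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    supply = ["managing diabetes with diet", "flu vaccination schedule", "sleep hygiene tips"]
    demand = ["what diet helps diabetes", "insomnia and poor sleep", "chronic back pain relief"]

    texts = supply + demand
    X = TfidfVectorizer().fit_transform(texts)

    k = 3  # number of topics, chosen arbitrarily for the toy data
    topics = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

    def topic_share(labels: np.ndarray, k: int) -> np.ndarray:
        counts = np.bincount(labels, minlength=k).astype(float)
        return counts / counts.sum()

    p_supply = topic_share(topics[: len(supply)], k)
    p_demand = topic_share(topics[len(supply):], k)

    print("supply topic shares:", p_supply)
    print("demand topic shares:", p_demand)
    print("Jensen-Shannon distance:", jensenshannon(p_supply, p_demand))
    ```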
  9. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.00
    1.4860956E-4 = product of:
      0.0025263624 = sum of:
        0.0025263624 = weight(_text_:in in 1981) [ClassicSimilarity], result of:
          0.0025263624 = score(doc=1981,freq=4.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.07438892 = fieldWeight in 1981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.05882353 = coord(1/17)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
  10. Reneker, M.; Jacobson, A.; Wargo, L.; Spink, A.: Information environment of a military university campus : an exploratory study (1999) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 6704) [ClassicSimilarity], result of:
          0.002041609 = score(doc=6704,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 6704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=6704)
      0.05882353 = coord(1/17)
    
    Abstract
    The Naval Postgraduate School (NPS) is a military university educating officers from the United States and 40 foreign countries. To investigate the NPS information environment, a large study obtained data on the range of information needs and behaviors of NPS personnel. The specific aim of the study was to supply organizational units with qualitative data specific to their client base, enabling them to improve campus systems and information services. Facilitators from the NPS Organizational Support Division conducted eighteen (18) focus groups during Spring Quarter 1998. Transcribed focus group sessions were analyzed using NUDIST software to identify key issues and results emerging from the data set. Categories of participants' information needs were identified, including an analysis of key information issues across the NPS campus. Use of Internet resources, other trusted individuals, and electronic indexes and abstracts ranked high among information sources used by NPS personnel. A picture emerges of a campus information environment poorly understood by the academic community. The three groups (students, staff, and faculty) articulated different concerns and look to different sources to satisfy their information needs. Participants' information-seeking problems centered on: (1) housing, registration and scheduling, computing, and the quality of information available on the campus computer network; (2) an inability to easily disseminate information quickly to an appropriate campus audience; (3) training in new information access technologies; and (4) the general lack of awareness of library resources and services. The paper discusses a method for more effectively disseminating information throughout the campus. Implications for the development of information-seeking models and a model of the NPS information environment are discussed.
  11. Blair, D.C.: ¬The challenge of commercial document retrieval : Part I: Major issues, and a framework based on search exhaustivity, determinacy of representation and document collection size (2002) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 2580) [ClassicSimilarity], result of:
          0.002041609 = score(doc=2580,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 2580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2580)
      0.05882353 = coord(1/17)
    
    Abstract
    With the growing focus on what is collectively known as "knowledge management", a shift continues to take place in commercial information system development: a shift away from the well-understood data retrieval/database model to the more complex and challenging development of commercial document/information retrieval models. While document retrieval has had a long and rich legacy of research, its impact on commercial applications has been modest. At the enterprise level most large organizations have little understanding of, or commitment to, high quality document access and management. Part of the reason for this is that we still do not have a good framework for understanding the major factors which affect the performance of large-scale corporate document retrieval systems. The thesis of this discussion is that document retrieval - specifically, access to intellectual content - is a complex process which is most strongly influenced by three factors: the size of the document collection; the type of search (exhaustive, existence or sample); and the determinacy of document representation. Collectively, these factors can be used to provide a useful framework for, or taxonomy of, document retrieval, and highlight some of the fundamental issues facing the design and development of commercial document retrieval systems. This is the first of a series of three articles. Part II (D.C. Blair, The challenge of commercial document retrieval. Part II. A strategy for document searching based on identifiable document partitions, Information Processing and Management, 2001b, this issue) will discuss the implications of this framework for search strategy, and Part III (D.C. Blair, Some thoughts on the reported results of Text REtrieval Conference (TREC), Information Processing and Management, 2002, forthcoming) will consider the importance of the TREC results for our understanding of operating information retrieval systems.
  12. Tredinnick, L.: Why Intranets fail (and how to fix them) : a practical guide for information professionals (2004) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 4499) [ClassicSimilarity], result of:
          0.002041609 = score(doc=4499,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 4499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4499)
      0.05882353 = coord(1/17)
    
    Abstract
    This book is a practical guide to some of the common problems associated with Intranets, and to solutions to those problems. The book takes a unique end-user perspective on the role of intranets within organisations. It explores how the needs of the end-user very often conflict with the needs of the organisation, creating a confusion of purpose that impedes the success of intranets. It sets out clearly why intranets cannot be thought of as merely internal Internets, and why they require their own management strategies and approaches. The book draws on a wide range of examples and analogies from a variety of contexts to set out in a clear and concise way the issues at the heart of failing intranets. It presents step-by-step solutions with universal application. Each issue discussed is accompanied by short practical suggestions for improved intranet design and architecture.
  13. Paskin, N.: DOI: current status and outlook (1999) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 1245) [ClassicSimilarity], result of:
          0.002041609 = score(doc=1245,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 1245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1245)
      0.05882353 = coord(1/17)
    
    Abstract
    Over the past few months the International DOI Foundation (IDF) has produced a number of discussion papers and other materials about the Digital Object Identifier (DOI) initiative. They are all available at the DOI web site, including a brief summary of the DOI origins and purpose. The aim of the present paper is to update those papers, reflecting recent progress, and to provide a summary of the current position and context of the DOI. Although much of the material presented here is the result of a consensus by the organisations forming the International DOI Foundation, some of the points discuss work in progress. The paper describes the origin of the DOI as a persistent identifier for managing copyrighted materials and its development under the non-profit International DOI Foundation into a system providing identifiers of intellectual property with a framework for open applications to be built using them. Persistent identification implementations consistent with URN specifications have up to now been hindered by lack of widespread availability of resolution mechanisms, content typology consensus, and sufficiently flexible infrastructure; DOI attempts to overcome these obstacles. Resolution of the DOI uses the Handle System®, which offers the necessary functionality for open applications. The aim of the International DOI Foundation is to promote widespread applications of the DOI, which it is doing by pioneering some early implementations and by providing an extensible framework to ensure interoperability of future DOI uses. Applications of the DOI will require an interoperable scheme of declared metadata with each DOI; the basis of the DOI metadata scheme is a minimal "kernel" of elements supplemented by additional application-specific elements, under an umbrella data model (derived from the INDECS analysis) that promotes convergence of different application metadata sets. The IDF intends to require declaration of only a minimal set of metadata, sufficient to enable unambiguous look-up of a DOI, but this must be capable of extension by others to create open applications.
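    As a small illustration of the resolution mechanism the abstract describes, the public https://doi.org/ proxy (backed by the Handle System) redirects a DOI to whatever URL is currently registered for it; the DOI below is only a placeholder:

    ```python
    # Resolve a DOI through the doi.org proxy and show where it currently points.
    import requests

    doi = "10.1000/182"  # placeholder DOI
    resp = requests.get(f"https://doi.org/{doi}", allow_redirects=False, timeout=10)
    print(resp.status_code)              # typically a 30x redirect status
    print(resp.headers.get("Location"))  # the URL the DOI currently resolves to
    ```

    Because the registrant can update that target URL without changing the DOI itself, the identifier stays stable even when the content moves.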
  14. Chowdhury, G.: Carbon footprint of the knowledge sector : what's the future? (2010) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 4152) [ClassicSimilarity], result of:
          0.002041609 = score(doc=4152,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4152)
      0.05882353 = coord(1/17)
    
    Abstract
    Purpose - The purpose of this paper is to produce figures showing the carbon footprint of the knowledge industry - from creation to distribution and use of knowledge, and to provide comparative figures for digital distribution and access. Design/methodology/approach - An extensive literature search and environmental scan was conducted to produce data relating to the CO2 emissions from various industries and activities such as book and journal production, photocopying activities, information technology and the internet. Other sources such as the International Energy Agency (IEA), Carbon Monitoring for Action (CARMA), Copyright Licensing Agency, UK (CLA), Copyright Agency Limited, Australia (CAL), etc., have been used to generate emission figures for production and distribution of print knowledge products versus digital distribution and access. Findings - The current practices for production and distribution of printed knowledge products generate an enormous amount of CO2. It is estimated that the book industry in the UK and USA alone produces about 1.8 million tonnes and about 11.27 million tonnes of CO2 respectively. CO2 emission for the worldwide journal publishing industry is estimated to be about 12 million tonnes. It is shown that the production and distribution costs of digital knowledge products are negligible compared to the environmental costs of production and distribution of printed knowledge products. Practical implications - Given the astounding emission figures for production and distribution of printed knowledge products, and the associated activities for access and distribution of these products, for example, emissions from photocopying activities permitted within the provisions of statutory licenses provided by agencies like CLA, CAL, etc., it is proposed that a digital distribution and access model is the way forward, and that such a system will be environmentally sustainable. Originality/value - It is expected that the findings of this study will pave the way for further research and this paper will be extremely helpful for design and development of the future knowledge distribution and access systems.
  15. Bianchini, D.; Antonellis, V. De: Linked data services and semantics-enabled mashup (2012) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 435) [ClassicSimilarity], result of:
          0.002041609 = score(doc=435,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 435, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=435)
      0.05882353 = coord(1/17)
    
    Abstract
    The Web of Linked Data can be seen as a global database, where resources are identified through URIs, are self-described (by means of the URI dereferencing mechanism), and are globally connected through RDF links. According to the Linked Data perspective, research attention is progressively shifting from data organization and representation to linkage and composition of the huge amount of data available on the Web. For example, at the time of this writing, the DBpedia knowledge base describes more than 3.5 million things, conceptualized through 672 million RDF triples, with 6.5 million external links into other RDF datasets. Useful applications have been provided for enabling people to browse this wealth of data, like Tabulator. Other systems have been implemented to collect, index, and provide advanced searching facilities over the Web of Linked Data, such as Watson and Sindice. Besides these applications, domain-specific systems to gather and mash up Linked Data have been proposed, like DBpedia Mobile and Revyu.com. DBpedia Mobile is a location-aware client for the semantic Web that can be used on an iPhone and other mobile devices. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, the user can explore background information about his or her surroundings. Revyu.com is a Web site where you can review and rate anything that can be identified (through a URI) on the Web. Nevertheless, the potential advantages implicit in the Web of Linked Data are far from being fully exploited. Current applications hardly go beyond presenting together data gathered from different sources. Recently, research on the Web of Linked Data has been devoted to the study of models and languages to add functionalities to the Web of Linked Data by means of Linked Data services.
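    A minimal sketch of the dereferencing mechanism the passage relies on: a resource is identified by a URI, and requesting that URI with an RDF media type returns a self-describing document whose triples link onward to other resources. The resource chosen below is only an example:

    ```python
    # Dereference a Linked Data URI with content negotiation and print the start
    # of the returned RDF (Turtle) description.
    import requests

    uri = "http://dbpedia.org/resource/Linked_data"  # example DBpedia resource URI
    resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=15)
    print(resp.status_code)
    print(resp.text[:500])  # the first few triples describing the resource
    ```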
  16. Falavarjani, S.A.M.; Jovanovic, J.; Fani, H.; Ghorbani, A.A.; Noorian, Z.; Bagheri, E.: On the causal relation between real world activities and emotional expressions of social media users (2021) 0.00
    1.2009465E-4 = product of:
      0.002041609 = sum of:
        0.002041609 = weight(_text_:in in 243) [ClassicSimilarity], result of:
          0.002041609 = score(doc=243,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.060115322 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=243)
      0.05882353 = coord(1/17)
    
    Abstract
    Social interactions through online social media have become a daily routine of many, and the number of those whose real world (offline) and online lives have become intertwined is continuously growing. As such, the interplay of individuals' online and offline activities has been the subject of numerous research studies, the majority of which explored the impact of people's online actions on their offline activities. The opposite direction of impact, the effect of real-world activities on online actions, has also received attention, but to a lesser degree. To contribute to this latter line of research, this paper reports on a quasi-experimental design study that examined the presence of causal relations between real-world activities of online social media users and their online emotional expressions. To this end, we collected a large dataset (over 17K users) from Twitter and Foursquare, and systematically aligned user content on the two social media platforms. Users' Foursquare check-ins provided information about their offline activities, whereas the users' expressions of emotions and moods were derived from their Twitter posts. Since our study was based on a quasi-experimental design, to minimize the impact of covariates, we applied an innovative model of computing propensity scores. Our main findings can be summarized as follows: (a) users' offline activities do impact their affective expressions, both of emotions and moods, as evidenced in their online shared textual content; (b) the impact depends on the type of offline activity and on whether the user embarks on or abandons the activity. Our findings can be used to devise a personalized recommendation mechanism to help people better manage their online emotional expressions.
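    The paper's own "innovative model of computing propensity scores" is not reproduced here; as a generic, hedged illustration of the quasi-experimental adjustment it refers to, a standard logistic-regression propensity score with inverse-probability weighting looks roughly like this (all data are simulated):

    ```python
    # Estimate a treatment effect while adjusting for confounding covariates via
    # propensity scores; a conventional stand-in for the paper's own model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    covariates = rng.normal(size=(n, 3))                    # observed user-level covariates (simulated)
    p_treat = 1 / (1 + np.exp(-covariates[:, 0]))           # treatment depends on a covariate -> confounding
    treated = (rng.random(n) < p_treat).astype(int)         # e.g., "did an offline activity"
    outcome = 0.5 * treated + covariates[:, 0] + rng.normal(size=n)  # e.g., emotional-expression score

    # Propensity score: estimated probability of treatment given the covariates.
    propensity = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

    # Inverse-probability weighting balances covariates before comparing group means.
    w_treated = 1.0 / propensity[treated == 1]
    w_control = 1.0 / (1.0 - propensity[treated == 0])
    effect = (np.average(outcome[treated == 1], weights=w_treated)
              - np.average(outcome[treated == 0], weights=w_control))
    print("estimated effect of the offline activity:", round(float(effect), 3))
    ```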
  17. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    1.0508282E-4 = product of:
      0.0017864079 = sum of:
        0.0017864079 = weight(_text_:in in 53) [ClassicSimilarity], result of:
          0.0017864079 = score(doc=53,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.052600905 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.05882353 = coord(1/17)
    
    Content
    # How does Archivo work?
    Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus.
    # Archivo's mission
    Archivo's mission is to improve FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable, and enforces interoperability with its star rating.
    - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.
    - Ontology users can better find, access, and re-use ontologies. Snapshots are persisted in case the original is no longer reachable, adding a layer of reliability to the decentral web of ontologies.
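    A toy sketch of the check-and-archive loop described above; this is not Archivo's code, and the ontology list, helpers, and storage step are placeholders:

    ```python
    # Poll known ontologies, detect changes by hashing the fetched serialization,
    # and persist a snapshot when something changed. Archivo re-checks every 8 hours;
    # a single pass is run here. All URIs and helpers are hypothetical.
    import hashlib

    import requests

    KNOWN_ONTOLOGIES = ["https://example.org/ontology.owl"]  # placeholder discovery result
    last_seen: dict[str, str] = {}

    def archive_snapshot(uri: str, payload: bytes) -> None:
        # Stand-in for "rate and archive the snapshot persistently on the DBpedia Databus".
        print(f"archiving new snapshot of {uri} ({len(payload)} bytes)")

    def check_once() -> None:
        for uri in KNOWN_ONTOLOGIES:
            resp = requests.get(uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
            digest = hashlib.sha256(resp.content).hexdigest()
            if last_seen.get(uri) != digest:                 # new or changed since the last check
                last_seen[uri] = digest
                archive_snapshot(uri, resp.content)

    check_once()
    ```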
