Search (5159 results, page 258 of 258)

  • type_ss:"a"
  • year_i:[2010 TO 2020}
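
The two active filters above use Lucene/Solr query syntax; in the year range, "[" marks an inclusive bound and "}" an exclusive one, so year_i:[2010 TO 2020} matches 2010 through 2019. As a minimal sketch only (the endpoint URL and core name are placeholders, not taken from this page; the field names come from the filters shown, and the page size of 20 is inferred from 5159 results over 258 pages), an equivalent filtered request against a Solr select handler could look like this:

    import requests

    params = {
        "q": "*:*",
        "fq": ['type_ss:"a"', "year_i:[2010 TO 2020}"],  # the two active filters above
        "rows": 20,                                      # inferred page size (5159 results / 258 pages)
        "start": 257 * 20,                               # offset for page 258
        "wt": "json",
    }
    # "http://localhost:8983/solr/mycore/select" is a placeholder endpoint, not this site's URL.
    resp = requests.get("http://localhost:8983/solr/mycore/select", params=params)
    print(resp.json()["response"]["numFound"])           # should report 5159 for this search
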
  1. Ninkov, A.; Vaughan, L.: A webometric analysis of the online vaccination debate (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3605) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3605,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3605)
      0.16666667 = coord(1/6)
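
    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring: score = coord x queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm. As a minimal sketch (a reconstruction from the figures shown, not output of this system; queryNorm and fieldNorm are taken as given), the displayed numbers can be reproduced as follows:

        import math

        freq, doc_freq, max_docs = 2.0, 30841, 44218
        query_norm, field_norm = 0.043654136, 0.0390625   # taken as given from the tree above
        coord = 1 / 6                                      # coord(1/6): 1 of 6 query clauses matched

        tf = math.sqrt(freq)                               # 1.4142135 = tf(freq=2.0)
        idf = 1 + math.log(max_docs / (doc_freq + 1))      # 1.3602545 = idf(docFreq=30841, maxDocs=44218)
        query_weight = idf * query_norm                    # 0.059380736 = queryWeight
        field_weight = tf * idf * field_norm               # 0.07514416 = fieldWeight
        print(coord * query_weight * field_weight)         # ~7.4368593E-4, the displayed score

    The same computation, with per-document doc IDs and field norms, applies to each of the results below.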
    
    Abstract
    Webometrics research methods can be effectively used to measure and analyze information on the web. One topic discussed vehemently online that could benefit from this type of analysis is vaccines. We carried out a study analyzing the web presence of both sides of this debate. We collected a variety of webometric data and analyzed the data both quantitatively and qualitatively. The study found far more anti- than pro-vaccine web domains. The anti and pro sides had similar web visibility as measured by the number of links coming from general websites and Tweets. However, the links to the pro domains were of higher quality measured by PageRank scores. The result from the qualitative content analysis confirmed this finding. The analysis of site ages revealed that the battle between the two sides had a long history and is still ongoing. The web scene was polarized with either pro or anti views and little neutral ground. The study suggests ways that professional information can be promoted more effectively on the web. The study demonstrates that webometrics analysis is effective in studying online information dissemination. This kind of analysis can be used to study not only health information but other information as well.
  2. Lee, K.; Kim, S.Y.; Kim, E.H.-J.; Song, M.: Comparative evaluation of bibliometric content networks by tomographic content analysis : an application to Parkinson's disease (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3606) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3606,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3606)
      0.16666667 = coord(1/6)
    
    Abstract
    To understand the current state of a discipline and to discover new knowledge of a certain theme, one builds bibliometric content networks based on the present knowledge entities. However, such networks can vary according to the collection of data sets relevant to the theme by querying knowledge entities. In this study we classify three different bibliometric content networks. The primary bibliometric network is based on knowledge entities relevant to a keyword of the theme, the secondary network is based on entities associated with the lower concepts of the keyword, and the tertiary network is based on entities influenced by the theme. To explore the content and properties of these networks, we propose a tomographic content analysis that takes a slice-and-dice approach to analyzing the networks. Our findings indicate that the primary network is best suited to understanding the current knowledge on a certain topic, whereas the secondary network is good at discovering new knowledge across fields associated with the topic, and the tertiary network is appropriate for outlining the current knowledge of the topic and relevant studies.
  3. Koltay, T.: The bright side of information : ways of mitigating information overload (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3709) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3709,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3709)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The complex phenomenon of information overload (IO) is one of the pathologies of our present information environment; symbolically, it signals the existence of a dark side of information. The purpose of this paper is to investigate approaches to mitigating IO. Hence, it is an attempt to display the bright side. Design/methodology/approach - Based on a literature review, the sources of IO are briefly presented, including the role of information technology and the influence of the data-intensive world. The main attention is given to possible ways of mitigating IO. Findings - There are both technological and social approaches to easing the symptoms of IO. While reducing IO by increasing search task delegation remains a distant goal, solutions emerge when information is properly designed and tools of information architecture are applied to enable findability. A wider range of coping strategies is available when we interact with information. Being critical towards information by exercising critical thinking and critical reading yields results if different, discipline-dependent literacies (above all information literacy and data literacy) are acquired and put into operation, slow principles are followed, and personal information management (PIM) tools are applied. Originality/value - The paper is intended as an add-on to the recent discussions and the evolving body of knowledge about the relationship between IO and information architecture, various literacies, and PIM.
  4. Minkov, E.; Kahanov, K.; Kuflik, T.: Graph-based recommendation integrating rating history and domain knowledge : application to on-site guidance of museum visitors (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3756) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3756,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3756)
      0.16666667 = coord(1/6)
    
    Abstract
    Visitors to museums and other cultural heritage sites encounter a wealth of exhibits in a variety of subject areas, but can explore only a small number of them. Moreover, there typically exists rich complementary information that can be delivered to the visitor about exhibits of interest, but only a fraction of this information can be consumed during the limited time of the visit. Recommender systems may help visitors to cope with this information overload. Ideally, the recommender system of choice should model user preferences, as well as background knowledge about the museum's environment, considering aspects of physical and thematic relevancy. We propose a personalized graph-based recommender framework, representing rating history and background multi-facet information jointly as a relational graph. A random walk measure is applied to rank available complementary multimedia presentations by their relevancy to a visitor's profile, integrating the various dimensions. We report the results of experiments conducted using authentic data collected at the Hecht museum. An evaluation of multiple graph variants, compared with several popular and state-of-the-art recommendation methods, indicates advantages of the graph-based approach.
  5. Kousha, K.; Thelwall, M.: News stories as evidence for research? : BBC citations from articles, Books, and Wikipedia (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3760) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3760,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3760)
      0.16666667 = coord(1/6)
    
    Abstract
    Although news stories target the general public and are sometimes inaccurate, they can serve as sources of real-world information for researchers. This article investigates the extent to which academics exploit journalism using content and citation analyses of online BBC News stories cited by Scopus articles. A total of 27,234 Scopus-indexed publications have cited at least one BBC News story, with a steady annual increase. Citations from the arts and humanities (2.8% of publications in 2015) and social sciences (1.5%) were more likely than citations from medicine (0.1%) and science (<0.1%). Surprisingly, half of the sampled Scopus-cited science and technology (53%) and medicine and health (47%) stories were based on academic research, rather than otherwise unpublished information, suggesting that researchers have chosen a lower-quality secondary source for their citations. Nevertheless, the BBC News stories that were most frequently cited by Scopus, Google Books, and Wikipedia introduced new information from many different topics, including politics, business, economics, statistics, and reports about events. Thus, news stories are mediating real-world knowledge into the academic domain, a potential cause for concern.
  6. Thelwall, M.; Kousha, K.: SlideShare presentations, citations, users, and trends : a professional site with academic and educational uses (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3766) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3766,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3766)
      0.16666667 = coord(1/6)
    
    Abstract
    SlideShare is a free social website that aims to help users distribute and find presentations. Owned by LinkedIn since 2012, it targets a professional audience but may give value to scholarship through creating a long-term record of the content of talks. This article tests this hypothesis by analyzing sets of general and scholarly related SlideShare documents using content and citation analysis and popularity statistics reported on the site. The results suggest that academics, students, and teachers are a minority of SlideShare uploaders, especially since 2010, with most documents not being directly related to scholarship or teaching. About two thirds of uploaded SlideShare documents are presentation slides, with the remainder often being files associated with presentations or video recordings of talks. SlideShare is therefore a presentation-centered site with a predominantly professional user base. Although a minority of the uploaded SlideShare documents are cited by, or cite, academic publications, probably too few articles are cited by SlideShare to consider extracting SlideShare citations for research evaluation. Nevertheless, scholars should consider SlideShare to be a potential source of academic and nonacademic information, particularly in library and information science, education, and business.
  7. Hider, P.: The search value added by professional indexing to a bibliographic database (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3868) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3868,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3868)
      0.16666667 = coord(1/6)
    
    Content
    Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  8. Saltz, J.; Shamshurin, I.; Connors, C.: Predicting data science sociotechnical execution challenges by categorizing data science projects (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3960) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3960,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3960, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3960)
      0.16666667 = coord(1/6)
    
    Abstract
    The challenge in executing a data science project is more than just identifying the best algorithm and tool set to use. Additional sociotechnical challenges include items such as how to define the project goals and how to ensure the project is effectively managed. This paper reports on a set of case studies where researchers were embedded within data science teams and where the researchers' observations and analysis were focused on the attributes that can help describe data science projects and the challenges faced by the teams executing these projects, as opposed to the algorithms and technologies that were used to perform the analytics. Based on our case studies, we identified 14 characteristics that can help describe a data science project. We then used these characteristics to create a model that defines two key dimensions of the project. Finally, by clustering the projects within these two dimensions, we identified four types of data science projects, and based on the type of project, we identified some of the sociotechnical challenges that project teams should expect to encounter when executing data science projects.
  9. Kim, Y.; Yoon, A.: Scientists' data reuse behaviors : a multilevel analysis (2017) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 3964) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=3964,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 3964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3964)
      0.16666667 = coord(1/6)
    
    Abstract
    This study explores the factors that influence the data reuse behaviors of scientists and identifies the generalized patterns that occur in data reuse across various disciplines. This research employed an integrated theoretical framework combining institutional theory and the theory of planned behavior. The combined theoretical framework can apply institutional theory at the individual level and extend the theory of planned behavior by including relevant contexts. This study utilized a survey method to test the proposed research model and hypotheses. Study participants were recruited from the Community of Science's (CoS) Scholar Database, and a total of 1,528 scientists responded to the survey. A multilevel analysis method was used to analyze the 1,237 qualified responses. This research showed that scientists' data reuse intentions are influenced by both disciplinary-level factors (availability of data repositories) and individual-level factors (perceived usefulness, perceived concern, and the availability of internal resources). This study has practical implications for promoting data reuse practices. Three main areas that need to be improved are identified: educating scientists, providing internal support, and providing external resources and support such as data repositories.
  10. Sande, M. Vander; Verborgh, R.; Hochstenbach, P.; Van de Sompel, H.: Toward sustainable publishing and querying of distributed Linked Data archives (2018) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 4195) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=4195,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 4195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4195)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The purpose of this paper is to detail a low-cost, low-maintenance publishing strategy aimed at unlocking the value of Linked Data collections held by libraries, archives and museums (LAMs). Design/methodology/approach - The shortcomings of commonly used Linked Data publishing approaches are identified, and the current lack of substantial collections of Linked Data exposed by LAMs is considered. To improve on the discussed status quo, a novel approach for publishing Linked Data is proposed and demonstrated by means of an archive of DBpedia versions, which is queried in combination with other Linked Data sources. Findings - The authors show that the approach makes publishing Linked Data archives easy and affordable, and supports distributed querying without causing untenable load on the Linked Data sources. Research limitations/implications - The proposed approach significantly lowers the barrier for publishing, maintaining, and making Linked Data collections queryable. As such, it offers the potential to substantially grow the distributed network of queryable Linked Data sources. Because the approach supports querying without causing unacceptable load on the sources, the queryable interfaces are expected to be more reliable, allowing them to become integral building blocks of robust applications that leverage distributed Linked Data sources. Originality/value - The novel publishing strategy significantly lowers the technical and financial barriers that LAMs face when attempting to publish Linked Data collections. The proposed approach yields Linked Data sources that can reliably be queried, paving the way for applications that leverage distributed Linked Data sources through federated querying.
  11. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Categorical relevance judgment (2018) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 4457) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=4457,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 4457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4457)
      0.16666667 = coord(1/6)
    
    Abstract
    In this study we aim to explore users' behavior when assessing search result relevance, based on the hypothesis of categorical thinking. To investigate how users categorize search engine results, we perform several experiments where users are asked to group a list of 20 search results into several categories, while attaching a relevance judgment to each formed category. Moreover, to determine how users change their minds over time, each experiment was repeated three times under the same conditions, with a gap of one month between rounds. The results show that on average users form 4-5 categories. Within each round, the size of a category decreases with the relevance of the category. To measure the agreement between the search engine's ranking and the users' relevance judgments, we defined two novel similarity measures, the average concordance and the MinMax swap ratio. Similarity is shown to be highest for the third round, as the users' opinions stabilize. Qualitative analysis uncovered some interesting points: users tended to categorize results by the type and reliability of their source, and in particular found commercial sites less trustworthy and attached high relevance to Wikipedia when their prior domain knowledge was limited.
  12. Lee, M.; Butler, B.S.: How are information deserts created? : a theory of local information landscapes (2019) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 4680) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=4680,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 4680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4680)
      0.16666667 = coord(1/6)
    
    Abstract
    To understand information accessibility issues, research has examined human and technical factors by taking a socio-technical view. While this view provides a profound understanding of how people seek, use, and access information, it often overlooks the larger structure of the information landscapes that shape people's information access. However, theorizing the information landscape of a local community at the community level is challenging because of the diverse contexts and users. One way to minimize the complexity is to focus on the materiality of information. By highlighting the material aspects of information, it becomes possible to understand the community-level structure of local information. This paper develops a theory of local information landscapes (LIL theory) to conceptualize the material structure of local information. LIL theory adapts a concept of the virtual as an ontological view of the local information that is embedded in technical infrastructures, spaces, and people. By complementing existing theories, this paper provides a new perspective on how information deserts manifest as a material pre-condition of information inequality. Based on these theoretical models, a research agenda is presented for future studies of local communities.
  13. Yaco, S.; Ramaprasad, A.: Informatics for cultural heritage instruction : an ontological framework (2019) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 5029) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=5029,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 5029, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5029)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The purpose of this paper is to suggest a framework that creates a common language to enhance the connection between the domains of cultural heritage (CH) artifacts and instruction. Design/methodology/approach - The CH and instruction domains are logically deconstructed into dimensions of functions, semiotics, CH, teaching/instructional materials, agents and outcomes. The elements within those dimensions can be concatenated to create natural-English sentences that describe aspects of the problem domain. Findings - The framework is shown to be valid against the traditional social sciences constructs of content, semantic, practical and systemic validity. Research limitations/implications - The framework can be used to map the current research literature to discover areas of heavy, light and no research. Originality/value - The framework provides a new way for CH and education stakeholders to describe and visualize the problem domain, which could allow for significant enhancements of each. A better understanding of the problem domain would serve to enhance instruction informed by collections and vice versa. The educational process would gain depth due to better access to primary sources. Increased use of collections would reveal more ways in which they could be used in instruction. The framework can help visualize the past and present of the domain, and envisage its future.
  14. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 5225) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=5225,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 5225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5225)
      0.16666667 = coord(1/6)
    
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%-the proportion of the top-10% most-highly-cited papers assigned to a university-we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco-system of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
  15. Kim, H.H.; Kim, Y.H.: ERP/MMR algorithm for classifying topic-relevant and topic-irrelevant visual shots of documentary videos (2019) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 5358) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=5358,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 5358, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5358)
      0.16666667 = coord(1/6)
    
    Footnote
    Contribution in a 'Special issue on neuro-information science'.
  16. Savoy, J.: Authorship of Pauline epistles revisited (2019) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 5386) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=5386,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 5386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5386)
      0.16666667 = coord(1/6)
    
    Abstract
    The name Paul appears in 13 epistles, but is he the real author? According to different biblical scholars, the number of letters really attributed to Paul varies from 4 to 13, with a majority agreeing on seven. This article proposes to revisit this authorship attribution problem by considering two effective methods (Burrows' Delta, Labbé's intertextual distance). Based on these results, a hierarchical clustering is then applied showing that four clusters can be derived, namely: {Colossians-Ephesians}, {1 and 2 Thessalonians}, {Titus, 1 and 2 Timothy}, and {Romans, Galatians, 1 and 2 Corinthians}. Moreover, a verification method based on the impostors' strategy indicates clearly that the group {Colossians-Ephesians} is written by the same author who seems not to be Paul. The same conclusion can be found for the cluster {Titus, 1 and 2 Timothy}. The Letter to Philemon stays as a singleton, without any close stylistic relationship with the other epistles. Finally, a group of four letters {Romans, Galatians, 1 and 2 Corinthians} is certainly written by the same author (Paul), but the verification protocol also indicates that 2 Corinthians is related to 1 Thessalonians, rendering a clear and simple interpretation difficult.
  17. Faniel, I.M.; Frank, R.D.; Yakel, E.: Context from the data reuser's point of view (2019) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 5469) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=5469,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 5469, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5469)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - Taking the researchers' perspective, the purpose of this paper is to examine the types of context information needed to preserve data's meaning in ways that support data reuse. Design/methodology/approach - This paper is based on a qualitative study of 105 researchers from three disciplinary communities: quantitative social science, archaeology and zoology. The study focused on researchers' most recent data reuse experience, particularly what they needed when deciding whether to reuse data. Findings - Findings show that researchers mentioned 12 types of context information across three broad categories: data production information (data collection, specimen and artifact, data producer, data analysis, missing data, and research objectives); repository information (provenance, reputation and history, curation and digitization); and data reuse information (prior reuse, advice on reuse and terms of use). Originality/value - This paper extends digital curation conversations to include the preservation of context as well as content to facilitate data reuse. When compared to prior research, findings show that there is some generalizability with respect to the types of context needed across different disciplines and data sharing and reuse environments. It also introduces several new context types. Relying on the perspective of researchers offers a more nuanced view that shows the importance of the different context types for each discipline and the ways disciplinary members thought about them. Both data producers and curators can benefit from knowing what to capture and manage during data collection and deposit into a repository.
  18. Tarulli, L.; Spiteri, L.F.: Library catalogues of the future : a social space and collaborative tool? (2012) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 5565) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=5565,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 5565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5565)
      0.16666667 = coord(1/6)
    
    Abstract
    Next-generation catalogues are providing opportunities for library professionals and users to interact, collaborate, and enhance core library functions. Technology, innovation, and creativity are all components that are merging to create a localized, online social space that brings our physical library services and experiences into an online environment. While patrons are comfortable creating user-generated information on commercial Web sites and social media Web sites, library professionals should be exploring alternative methods of use for these tools within the library setting. Can the library catalogue promote remote readers' advisory services and act as a localized "Google"? Will patrons or library professionals be the driving force behind user-generated content within our catalogues? How can cataloguers be sure that the integrity of their bibliographic records is protected while inviting additional data sources to display in our catalogues? As library catalogues bring our physical library services into the online environment, catalogues also begin to encroach or "mash-up" with other areas of librarianship that have not been part of a cataloguer's expertise. Using library catalogues beyond their traditional role as tools for discovery and access raises issues surrounding the expertise of library professionals and the benefits of collaboration between frontline and backroom staff.
  19. Chowdhury, G.: Carbon footprint of the knowledge sector : what's the future? (2010) 0.00
    5.949487E-4 = product of:
      0.0035696921 = sum of:
        0.0035696921 = weight(_text_:in in 4152) [ClassicSimilarity], result of:
          0.0035696921 = score(doc=4152,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.060115322 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4152)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The purpose of this paper is to produce figures showing the carbon footprint of the knowledge industry - from creation to distribution and use of knowledge, and to provide comparative figures for digital distribution and access. Design/methodology/approach - An extensive literature search and environmental scan was conducted to produce data relating to the CO2 emissions from various industries and activities such as book and journal production, photocopying activities, information technology and the internet. Other sources such as the International Energy Agency (IEA), Carbon Monitoring for Action (CARMA), Copyright Licensing Agency, UK (CLA), Copyright Agency Limited, Australia (CAL), etc., have been used to generate emission figures for production and distribution of print knowledge products versus digital distribution and access. Findings - The current practices for production and distribution of printed knowledge products generate an enormous amount of CO2. It is estimated that the book industry in the UK and USA alone produces about 1.8 million tonnes and about 11.27 million tonnes of CO2 respectively. CO2 emissions for the worldwide journal publishing industry are estimated to be about 12 million tonnes. It is shown that the production and distribution costs of digital knowledge products are negligible compared to the environmental costs of production and distribution of printed knowledge products. Practical implications - Given the astounding emission figures for production and distribution of printed knowledge products, and the associated activities for access and distribution of these products, for example, emissions from photocopying activities permitted within the provisions of statutory licenses provided by agencies like CLA, CAL, etc., it is proposed that a digital distribution and access model is the way forward, and that such a system will be environmentally sustainable. Originality/value - It is expected that the findings of this study will pave the way for further research and that this paper will be extremely helpful for the design and development of future knowledge distribution and access systems.

Languages

  • e 3856
  • d 1272
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1
  • sp 1

Types

  • el 473
  • b 5
  • s 1
  • x 1

Themes