Search (101 results, page 1 of 6)

  • language_ss:"e"
  • year_i:[2020 TO 2030}
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.14
    0.14457956 = product of:
      0.28915912 = sum of:
        0.07228978 = product of:
          0.21686934 = sum of:
            0.21686934 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21686934 = score(doc=862,freq=2.0), product of:
                0.38587612 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045514934 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.21686934 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.21686934 = score(doc=862,freq=2.0), product of:
            0.38587612 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045514934 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Source
    https://arxiv.org/abs/2212.06721
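    The indented breakdown above the source is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and coord() scales the sum by the fraction of query clauses that matched. (The odd terms _text_:3a and _text_:2f are the percent-encoded characters of the source URL, which were indexed as tokens.) As a rough sketch, the displayed score 0.14457956 for this entry can be replayed from those factors; the snippet below only reproduces the arithmetic of the tree and is not part of the search system itself.

      # Recomputing the ClassicSimilarity (TF-IDF) score of entry 1 from the
      # factors shown in its explain tree (doc 862); illustrative only.
      import math

      idf = 8.478011            # idf(docFreq=24, maxDocs=44218)
      query_norm = 0.045514934
      field_norm = 0.046875
      tf = math.sqrt(2.0)       # tf(freq=2.0) = sqrt(termFreq) = 1.4142135

      query_weight = idf * query_norm           # 0.38587612
      field_weight = tf * idf * field_norm      # 0.56201804
      term_score = query_weight * field_weight  # 0.21686934 per matching term

      # "_text_:3a" is scaled by an inner coord(1/3); the outer coord(2/4)
      # reflects that 2 of 4 query clauses matched in this document.
      total = (term_score * (1 / 3) + term_score) * (2 / 4)
      print(total)              # ~0.1445796, i.e. the displayed 0.14457956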
  2. Dijk, J.: ¬The digital divide (2020) 0.04
    0.039819542 = product of:
      0.15927817 = sum of:
        0.15927817 = weight(_text_:soziale in 68) [ClassicSimilarity], result of:
          0.15927817 = score(doc=68,freq=4.0), product of:
            0.2780798 = queryWeight, product of:
              6.1096387 = idf(docFreq=266, maxDocs=44218)
              0.045514934 = queryNorm
            0.57277864 = fieldWeight in 68, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1096387 = idf(docFreq=266, maxDocs=44218)
              0.046875 = fieldNorm(doc=68)
      0.25 = coord(1/4)
    
    RSWK
    Soziale Ungleichheit
    Subject
    Soziale Ungleichheit
  3. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.02
    0.017601274 = product of:
      0.070405096 = sum of:
        0.070405096 = sum of:
          0.03957187 = weight(_text_:software in 5843) [ClassicSimilarity], result of:
            0.03957187 = score(doc=5843,freq=2.0), product of:
              0.18056466 = queryWeight, product of:
                3.9671519 = idf(docFreq=2274, maxDocs=44218)
                0.045514934 = queryNorm
              0.21915624 = fieldWeight in 5843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9671519 = idf(docFreq=2274, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5843)
          0.030833228 = weight(_text_:22 in 5843) [ClassicSimilarity], result of:
            0.030833228 = score(doc=5843,freq=2.0), product of:
              0.15938555 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045514934 = queryNorm
              0.19345059 = fieldWeight in 5843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5843)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
    Design/methodology/approach: In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and misstep/s.
    Findings: Various factors compromise shared files retrieval, including collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
    Originality/value: Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
    Date
    20. 1.2015 18:30:22
  4. Chassanoff, A.; Altman, M.: Curation as "Interoperability With the Future" : preserving scholarly research software in academic libraries (2020) 0.01
    0.014539635 = product of:
      0.05815854 = sum of:
        0.05815854 = product of:
          0.11631708 = sum of:
            0.11631708 = weight(_text_:software in 5673) [ClassicSimilarity], result of:
              0.11631708 = score(doc=5673,freq=12.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.6441852 = fieldWeight in 5673, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5673)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article considers the problem of preserving research software within the wider realm of digital curation, academic research libraries, and the scholarly record. We conducted a pilot study to understand the ecosystem in which research software participates, and to identify significant characteristics that have high potential to support future scholarly practices. A set of topical curation dimensions was derived from the extant literature and applied to select cases of institutionally significant research software. This approach yields our main contribution, a curation model and decision framework for preserving research software as a scholarly object. The results of our study highlight the unique characteristics and challenges at play in building curation services in academic research libraries.
    Form
    Software
  5. Du, C.; Cohoon, J.; Lopez, P.; Howison, J.: Softcite dataset : a dataset of software mentions in biomedical and economic research publications (2021) 0.01
    0.01327281 = product of:
      0.05309124 = sum of:
        0.05309124 = product of:
          0.10618248 = sum of:
            0.10618248 = weight(_text_:software in 262) [ClassicSimilarity], result of:
              0.10618248 = score(doc=262,freq=10.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.58805794 = fieldWeight in 262, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold-standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.
    Form
    Software
  6. Acker, A.: Emulation practices for software preservation in libraries, archives, and museums (2021) 0.01
    0.0130871665 = product of:
      0.052348666 = sum of:
        0.052348666 = product of:
          0.10469733 = sum of:
            0.10469733 = weight(_text_:software in 334) [ClassicSimilarity], result of:
              0.10469733 = score(doc=334,freq=14.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.5798329 = fieldWeight in 334, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Emulation practices are computational, technical processes that allow for one system to reproduce the functions and results of another. This article reports on findings from research following three small teams of information professionals as they implemented emulation practices into their digital preservation programs at a technology museum, a university research library, and a university research archive and technology lab. Results suggest that the distributed teams in this cohort of preservationists have developed different emulation practices for particular kinds of "emulation encounters" in supporting different types of access. I discuss the implications of these findings for digital preservation research and emulation initiatives providing access to software or software-dependent objects, showing how they have significance for those developing software preservation workflows and building emulation capacities. These findings suggest that different emulation practices for preservation, research access, and exhibition undertaken in libraries, archives, and museums result in different forms of access to preserved software: accessing information and experiential access. In examining particular types of access, this research calls into question software emulation as a single, static preservation strategy for information institutions and challenges researchers to examine new forms of access and descriptive representation emerging from these digital preservation strategies.
    Form
    Software
  7. Gomez, J.; Allen, K.; Matney, M.; Awopetu, T.; Shafer, S.: Experimenting with a machine generated annotations pipeline (2020) 0.01
    0.011192616 = product of:
      0.044770464 = sum of:
        0.044770464 = product of:
          0.08954093 = sum of:
            0.08954093 = weight(_text_:software in 657) [ClassicSimilarity], result of:
              0.08954093 = score(doc=657,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.49589399 = fieldWeight in 657, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=657)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The UCLA Library reorganized its software developers into focused subteams with one, the Labs Team, dedicated to conducting experiments. In this article we describe our first attempt at conducting a software development experiment, in which we attempted to improve our digital library's search results with metadata from cloud-based image tagging services. We explore the findings and discuss the lessons learned from our first attempt at running an experiment.
  8. Rozas, D.; Huckle, S.: Loosen control without losing control : formalization and decentralization within commons-based peer production (2021) 0.01
    0.009793539 = product of:
      0.039174154 = sum of:
        0.039174154 = product of:
          0.07834831 = sum of:
            0.07834831 = weight(_text_:software in 91) [ClassicSimilarity], result of:
              0.07834831 = score(doc=91,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.43390724 = fieldWeight in 91, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=91)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This study considers commons-based peer production (CBPP) by examining the organizational processes of the free/libre open-source software community, Drupal. It does so by exploring the sociotechnical systems that have emerged around both Drupal's development and its face-to-face communitarian events. There has been criticism of the simplistic nature of previous research into free software; this study addresses this by linking studies of CBPP with a qualitative study of Drupal's organizational processes. It focuses on the evolution of organizational structures, identifying the intertwined dynamics of formalization and decentralization, resulting in coexisting sociotechnical systems that vary in their degrees of organicity.
  9. Yang, X.; Li, X.; Hu, D.; Wang, H.J.: Differential impacts of social influence on initial and sustained participation in open source software projects (2021) 0.01
    0.008567562 = product of:
      0.03427025 = sum of:
        0.03427025 = product of:
          0.0685405 = sum of:
            0.0685405 = weight(_text_:software in 332) [ClassicSimilarity], result of:
              0.0685405 = score(doc=332,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.37958977 = fieldWeight in 332, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=332)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Social networking tools and visible information about developer activities on open source software (OSS) development platforms can leverage developers' social influence to attract more participation from their peers. However, the differential impacts of such social influence on developers' initial and sustained participation behaviors were largely overlooked in previous research. We empirically studied the impacts of two social influence mechanisms-word-of-mouth (WOM) and observational learning (OL)-on these two types of participation, using data collected from a large OSS development platform called Open Hub. We found that action (OL) speaks louder than words (WOM) with regard to sustained participation. Moreover, project age positively moderates the impacts of social influence on both types of participation. For projects with a higher average workload, the impacts of OL are reduced on initial participation but are increased on sustained participation. Our study provides a better understanding of how social influence affects OSS developers' participation behaviors. It also offers important practical implications for designing software development platforms that can leverage social influence to attract more initial and sustained participation.
  10. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.01
    0.008394462 = product of:
      0.03357785 = sum of:
        0.03357785 = product of:
          0.0671557 = sum of:
            0.0671557 = weight(_text_:software in 640) [ClassicSimilarity], result of:
              0.0671557 = score(doc=640,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.3719205 = fieldWeight in 640, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Involving users in early phases of software development has become a common strategy as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure allowing experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analyses to enable large-scale A/B experiments for academic search.
  11. DuBose, J.: Cataloging virtual reality programs : making the future searchable (2024) 0.01
    0.007914375 = product of:
      0.0316575 = sum of:
        0.0316575 = product of:
          0.063315 = sum of:
            0.063315 = weight(_text_:software in 1155) [ClassicSimilarity], result of:
              0.063315 = score(doc=1155,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.35064998 = fieldWeight in 1155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1155)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Form
    Software
  12. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.01
    0.007708307 = product of:
      0.030833228 = sum of:
        0.030833228 = product of:
          0.061666455 = sum of:
            0.061666455 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
              0.061666455 = score(doc=1085,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.38690117 = fieldWeight in 1085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1085)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    18. 8.2022 19:22:57
  13. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI/ML-based subject indexing system for libraries (2023) 0.01
    0.006995385 = product of:
      0.02798154 = sum of:
        0.02798154 = product of:
          0.05596308 = sum of:
            0.05596308 = weight(_text_:software in 977) [ClassicSimilarity], result of:
              0.05596308 = score(doc=977,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30993375 = fieldWeight in 977, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=977)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The research study as reported here is an attempt to explore the possibilities of an AI/ML-based semi-automated indexing system in a library setup to handle large volumes of documents. It uses the Python virtual environment to install and configure an open source AI environment (named Annif) to feed the LOD (Linked Open Data) dataset of Library of Congress Subject Headings (LCSH) as a standard KOS (Knowledge Organisation System). The framework deployed the Turtle format of LCSH after cleaning the file with Skosify, applied an array of backend algorithms (namely TF-IDF, Omikuji, and NN-Ensemble) to measure relative performance, and selected Snowball as an analyser. The training of Annif was conducted with a large set of bibliographic records populated with subject descriptors (MARC tag 650$a) and indexed by trained LIS professionals. The training dataset is first treated with MarcEdit to export it in a format suitable for OpenRefine, and then in OpenRefine it undergoes many steps to produce a bibliographic record set suitable to train Annif. The framework, after training, has been tested with a bibliographic dataset to measure indexing efficiencies, and finally, the automated indexing framework is integrated with data wrangling software (OpenRefine) to produce suggested headings on a mass scale. The entire framework is based on open-source software, open datasets, and open standards.
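    The workflow described in this abstract centres on an Annif project definition plus a handful of CLI calls. The sketch below is only an approximation under assumptions not taken from the abstract: a local Annif installation, LCSH available as lcsh.ttl, a TSV training corpus, and the project id "lcsh-tfidf" are all illustrative, and the exact subcommand names (load-vocab vs. loadvoc) and analyzer argument vary between Annif releases.

      # Minimal sketch of an Annif TF-IDF project for LCSH (illustrative names;
      # verify subcommands and config keys against the installed Annif version).
      import pathlib, subprocess, textwrap

      pathlib.Path("projects.cfg").write_text(textwrap.dedent("""\
          [lcsh-tfidf]
          name=LCSH TF-IDF (sketch)
          language=en
          backend=tfidf
          analyzer=snowball(en)
          vocab=lcsh
      """))

      # Load the vocabulary, train on the prepared corpus, then suggest headings.
      subprocess.run(["annif", "load-vocab", "lcsh", "lcsh.ttl"], check=True)
      subprocess.run(["annif", "train", "lcsh-tfidf", "training-docs.tsv"], check=True)
      result = subprocess.run(["annif", "suggest", "lcsh-tfidf"],
                              input="Text of a record to be indexed ...",
                              capture_output=True, text=True, check=True)
      print(result.stdout)      # suggested LCSH headings with scores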
  14. Williams, B.: Dimensions & VOSviewer bibliometrics in the reference interview (2020) 0.01
    0.006925077 = product of:
      0.027700309 = sum of:
        0.027700309 = product of:
          0.055400617 = sum of:
            0.055400617 = weight(_text_:software in 5719) [ClassicSimilarity], result of:
              0.055400617 = score(doc=5719,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30681872 = fieldWeight in 5719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5719)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
  15. Dunsire, G.; Fritz, D.; Fritz, R.: Instructions, interfaces, and interoperable data : the RIMMF experience with RDA revisited (2020) 0.01
    0.006925077 = product of:
      0.027700309 = sum of:
        0.027700309 = product of:
          0.055400617 = sum of:
            0.055400617 = weight(_text_:software in 5751) [ClassicSimilarity], result of:
              0.055400617 = score(doc=5751,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30681872 = fieldWeight in 5751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5751)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article presents a case study of RIMMF, a software tool developed to improve the orientation and training of catalogers who use Resource Description and Access (RDA) to maintain bibliographic data. The cataloging guidance and instructions of RDA are based on the Functional Requirements conceptual models that are now consolidated in the IFLA Library Reference Model, but many catalogers are applying RDA in systems that have evolved from inventory and text-processing applications developed from older metadata paradigms. The article describes how RIMMF interacts with the RDA Toolkit and RDA Registry to offer cataloger-friendly multilingual data input and editing interfaces.
  16. Hahn, J.: Semi-automated methods for BIBFRAME work entity description (2021) 0.01
    0.006925077 = product of:
      0.027700309 = sum of:
        0.027700309 = product of:
          0.055400617 = sum of:
            0.055400617 = weight(_text_:software in 725) [ClassicSimilarity], result of:
              0.055400617 = score(doc=725,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30681872 = fieldWeight in 725, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=725)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper reports an investigation of machine learning methods for the semi-automated creation of a BIBFRAME Work entity description within the RDF linked data editor Sinopia (https://sinopia.io). The automated subject indexing software Annif was configured with the Library of Congress Subject Headings (LCSH) vocabulary from the Linked Data Service at https://id.loc.gov/. The training corpus was comprised of 9.3 million titles and LCSH linked data references from the IvyPlus POD project (https://pod.stanford.edu/) and from Share-VDE (https://wiki.share-vde.org). Semi-automated processes were explored to support and extend, not replace, professional expertise.
  17. Morris, V.: Automated language identification of bibliographic resources (2020) 0.01
    0.0061666453 = product of:
      0.024666581 = sum of:
        0.024666581 = product of:
          0.049333163 = sum of:
            0.049333163 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.049333163 = score(doc=5749,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    2. 3.2020 19:04:22
  18. Geras, A.; Siudem, G.; Gagolewski, M.: Time to vote : temporal clustering of user activity on Stack Overflow (2022) 0.01
    0.0059357807 = product of:
      0.023743123 = sum of:
        0.023743123 = product of:
          0.047486246 = sum of:
            0.047486246 = weight(_text_:software in 765) [ClassicSimilarity], result of:
              0.047486246 = score(doc=765,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2629875 = fieldWeight in 765, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=765)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Question-and-answer (Q&A) sites improve access to information and ease transfer of knowledge. In recent years, they have grown in popularity and importance, enabling research on behavioral patterns of their users. We study the dynamics related to the casting of 7 million votes across a sample of 700,000 posts on Stack Overflow, a large community of professional software developers. We employ log-Gaussian mixture modeling and Markov chains to formulate a simple yet elegant description of the considered phenomena. We indicate that the interevent times can naturally be clustered into three typical time scales: those which occur within hours, weeks, and months, and we show how the events become rarer and rarer as time passes. It turns out that a post's popularity in a short period after publication is a weak predictor of its overall success, contrary to what was observed, for example, in the case of YouTube clips. Nonetheless, the sleeping beauties sometimes awake and can receive bursts of votes following each other relatively quickly.
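    The "log-Gaussian mixture" step described here amounts to fitting an ordinary Gaussian mixture to log-transformed inter-event times, so that each mixture component corresponds to one characteristic time scale (hours, weeks, months). The sketch below illustrates that idea on synthetic data; it is not the authors' code, and the interval data are invented purely for illustration.

      # Fitting a 3-component Gaussian mixture to log inter-event times
      # (synthetic intervals standing in for real vote timestamps).
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      intervals = np.concatenate([
          rng.lognormal(mean=np.log(3_600), sigma=1.0, size=500),      # ~hours
          rng.lognormal(mean=np.log(604_800), sigma=0.8, size=300),    # ~weeks
          rng.lognormal(mean=np.log(2_592_000), sigma=0.6, size=200),  # ~months
      ])

      X = np.log(intervals).reshape(-1, 1)   # log-Gaussian mixture = GMM on the log scale
      gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
      for mean, weight in zip(gmm.means_.ravel(), gmm.weights_):
          print(f"time scale ~{np.exp(mean) / 3600:9.1f} hours, share {weight:.2f}")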
  19. Alipour, O.; Soheili, F.; Khasseh, A.A.: ¬A co-word analysis of global research on knowledge organization: 1900-2019 (2022) 0.01
    0.005596308 = product of:
      0.022385232 = sum of:
        0.022385232 = product of:
          0.044770464 = sum of:
            0.044770464 = weight(_text_:software in 1106) [ClassicSimilarity], result of:
              0.044770464 = score(doc=1106,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.24794699 = fieldWeight in 1106, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1106)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The study's objective is to analyze the structure of knowledge organization studies conducted worldwide. This applied research has been conducted with a scientometrics approach using co-word analysis. The research records consisted of all articles published in the journals Knowledge Organization and Cataloging & Classification Quarterly and keywords related to the field of knowledge organization indexed in Web of Science from 1900 to 2019; in total, 17,950 records were analyzed in plain-text format. The total number of keywords was 25,480, which was reduced to 12,478 keywords after modifications and removal of duplicates. Then, 115 keywords with a frequency of at least 18 were included in the final analysis, and finally, the co-word network was drawn. BibExcel, UCINET, VOSviewer, and SPSS software were used to draw matrices, analyze co-word networks, and draw dendrograms. Furthermore, strategic diagrams were drawn using Excel software. The keywords "information retrieval," "classification," and "ontology" are among the most frequently used keywords in knowledge organization articles. Findings revealed that "Ontology*Semantic Web", "Digital Library*Information Retrieval" and "Indexing*Information Retrieval" are highly frequent co-word pairs, respectively. The results of hierarchical clustering indicated that global research on knowledge organization consists of eight main thematic clusters; the largest is devoted to the topic of "classification, indexing, and information retrieval." The smallest clusters deal with the topics of "data processing" and "theoretical concepts of information and knowledge organization" respectively. Cluster 1 (cataloging standards and knowledge organization) has the highest density, while Cluster 5 (classification, indexing, and information retrieval) has the highest centrality. According to the findings of this research, the keyword "information retrieval" has played a significant role in knowledge organization studies, both as a keyword and as part of co-word pairs. In the co-word section, there is a type of related or general-topic relationship between co-word pairs. Results indicated that information retrieval is one of the main topics in knowledge organization, while the theoretical concepts of knowledge organization have been neglected. In general, the co-word structure of knowledge organization research indicates the multiplicity of concepts and topics studied in this field globally.
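    At its core, the co-word analysis described here counts how often two keywords are assigned to the same record and then clusters and maps the resulting co-occurrence matrix. The sketch below shows only the counting step, on a few invented keyword lists; the study's own data and the BibExcel/UCINET/VOSviewer tooling are not reproduced.

      # Building a keyword co-occurrence (co-word) matrix from per-record
      # keyword lists (invented examples, for illustration only).
      from collections import Counter
      from itertools import combinations

      records = [
          ["information retrieval", "classification", "indexing"],
          ["ontology", "semantic web"],
          ["information retrieval", "digital library"],
          ["information retrieval", "indexing"],
      ]

      cooccurrence = Counter()
      for keywords in records:
          for a, b in combinations(sorted(set(keywords)), 2):
              cooccurrence[(a, b)] += 1

      for (a, b), count in cooccurrence.most_common(3):
          print(f"{a} * {b}: {count}")   # pairs such as "indexing * information retrieval"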
  20. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.01
    0.005395815 = product of:
      0.02158326 = sum of:
        0.02158326 = product of:
          0.04316652 = sum of:
            0.04316652 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.04316652 = score(doc=40,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    17.11.2020 12:22:59

Types

  • a 93
  • el 7
  • m 4
  • p 3