Search (152 results, page 1 of 8)

  • × year_i:[2020 TO 2030}
  1. Zhang, Y.; Wu, M.; Zhang, G.; Lu, J.: Stepping beyond your comfort zone : diffusion-based network analytics for knowledge trajectory recommendation (2023) 0.07
    0.07381185 = product of:
      0.1476237 = sum of:
        0.1476237 = sum of:
          0.11276226 = weight(_text_:network in 994) [ClassicSimilarity], result of:
            0.11276226 = score(doc=994,freq=8.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.492033 = fieldWeight in 994, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.0390625 = fieldNorm(doc=994)
          0.034861445 = weight(_text_:22 in 994) [ClassicSimilarity], result of:
            0.034861445 = score(doc=994,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.19345059 = fieldWeight in 994, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=994)
      0.5 = coord(1/2)
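The explain trees shown for each hit follow Lucene's ClassicSimilarity (TF-IDF) formula. As a sanity check, the weights printed for result 1 can be reproduced with a minimal sketch of that formula; the constants are copied verbatim from the explain output above, and this is the scoring formula only, not Lucene's actual implementation:

```python
import math

# Per-term weight in Lucene ClassicSimilarity:
#   weight = queryWeight * fieldWeight
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
def term_weight(freq, idf, query_norm, field_norm):
    tf = math.sqrt(freq)                  # e.g. tf(freq=8.0) = 2.828427
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Values from result 1 (doc 994):
w_network = term_weight(8.0, 4.4533744, 0.05146125, 0.0390625)  # ~0.11276226
w_22      = term_weight(2.0, 3.5018296, 0.05146125, 0.0390625)  # ~0.034861445

# coord(1/2): only 1 of 2 top-level query clauses matched this doc.
score = (w_network + w_22) * 0.5          # ~0.07381185, as displayed
```

The same arithmetic accounts for every explain tree in this result list; only the frequencies, idf values, and field norms change per document.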
    
    Abstract
    Predicting a researcher's knowledge trajectories beyond their current foci can leverage potential inter-/cross-/multi-disciplinary interactions to achieve exploratory innovation. In this study, we present a method of diffusion-based network analytics for knowledge trajectory recommendation. The method begins by constructing a heterogeneous bibliometric network consisting of a co-topic layer and a co-authorship layer. A novel link prediction approach with a diffusion strategy is then used to capture the interactions between social elements (e.g., collaboration) and knowledge elements (e.g., technological similarity) in the process of exploratory innovation. This diffusion strategy differentiates the interactions occurring among homogeneous and heterogeneous nodes in the heterogeneous bibliometric network and weights the strengths of these interactions. Two sets of experiments, one with a local dataset and the other with a global dataset, demonstrate that the proposed method outperforms 10 selected baselines in link prediction, recommender systems, and upstream graph representation learning. A case study recommending the knowledge trajectories of information scientists with topical hierarchy and explainable mediators reveals the proposed method's reliability and potential practical uses in broad scenarios.
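One hedged reading of the setup described above: relevance diffuses from a researcher's current foci through a two-layer graph, with different strengths for hops among homogeneous nodes (within the co-topic or co-authorship layer) and heterogeneous nodes (across layers). A minimal one-step sketch, with an invented toy graph and weights; the paper's actual diffusion strategy is more elaborate:

```python
# One diffusion step over a two-layer bibliometric graph. Each edge kind
# (co-topic, co-authorship, cross-layer) carries its own strength, so
# homogeneous and heterogeneous interactions are weighted differently.
# Graph, node names, and weights are illustrative only.
def diffuse(scores, edges, weights):
    # edges: {(u, v): kind}; weights: {kind: strength}
    new = dict(scores)
    for (u, v), kind in edges.items():
        w = weights[kind]
        new[v] = new.get(v, 0.0) + w * scores.get(u, 0.0)  # u -> v
        new[u] = new.get(u, 0.0) + w * scores.get(v, 0.0)  # v -> u
    return new

scores = {"topic:networks": 1.0}           # seed: the researcher's current focus
edges = {
    ("topic:networks", "topic:diffusion"): "co-topic",      # homogeneous
    ("topic:networks", "author:zhang"):    "cross-layer",   # heterogeneous
    ("author:zhang",   "author:lu"):       "co-authorship", # homogeneous
}
weights = {"co-topic": 0.5, "cross-layer": 0.3, "co-authorship": 0.4}

out = diffuse(scores, edges, weights)
# topic:diffusion picks up 0.5 and author:zhang 0.3 from the seed node
```

Iterating such steps and ranking the resulting topic scores would yield candidate trajectories; the paper's method additionally grounds this in link prediction over the heterogeneous network.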
    Date
    22. 6.2023 18:07:12
  2. Milard, B.; Pitarch, Y.: Egocentric cocitation networks and scientific papers destinies (2023) 0.07
    0.06875784 = product of:
      0.13751568 = sum of:
        0.13751568 = sum of:
          0.09568194 = weight(_text_:network in 918) [ClassicSimilarity], result of:
            0.09568194 = score(doc=918,freq=4.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.41750383 = fieldWeight in 918, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.046875 = fieldNorm(doc=918)
          0.041833732 = weight(_text_:22 in 918) [ClassicSimilarity], result of:
            0.041833732 = score(doc=918,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.23214069 = fieldWeight in 918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=918)
      0.5 = coord(1/2)
    
    Abstract
    To what extent is the destiny of a scientific paper shaped by the cocitation network in which it is involved? What are the social contexts that can explain this structuring? Using bibliometric data, interviews with researchers, and social network analysis, this article proposes a typology based on egocentric cocitation networks that displays a quadruple structuring (before and after publication): polarization, clusterization, atomization, and attrition. It shows that the academic capital of the authors and the intellectual resources of their research are key factors in these destinies, as are the social relations between the authors concerned. The circumstances of publishing are also correlated with the structuring of the egocentric cocitation networks, showing how socially embedded they are. Finally, the article discusses the contribution of these original networks to the analysis of scientific production and its dynamics.
    Date
    21. 3.2023 19:22:14
  3. Haimson, O.L.; Carter, A.J.; Corvite, S.; Wheeler, B.; Wang, L.; Liu, T.; Lige, A.: ¬The major life events taxonomy : social readjustment, social media information sharing, and online network separation during times of life transition (2021) 0.07
    0.06625821 = product of:
      0.13251641 = sum of:
        0.13251641 = sum of:
          0.097654976 = weight(_text_:network in 263) [ClassicSimilarity], result of:
            0.097654976 = score(doc=263,freq=6.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.42611307 = fieldWeight in 263, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.0390625 = fieldNorm(doc=263)
          0.034861445 = weight(_text_:22 in 263) [ClassicSimilarity], result of:
            0.034861445 = score(doc=263,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.19345059 = fieldWeight in 263, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=263)
      0.5 = coord(1/2)
    
    Abstract
    When people experience major life changes, this often impacts their self-presentation, networks, and online behavior in substantial ways. To effectively study major life transitions and events, we surveyed a large U.S. sample (n = 554) to create the Major Life Events Taxonomy, a list of 121 life events in 12 categories. We then applied this taxonomy to a second large U.S. survey sample (n = 775) to understand on average how much social readjustment each event required, how likely each event was to be shared on social media with different types of audiences, and how much online network separation each involved. We found that social readjustment is positively correlated with sharing on social media, with both broad audiences and close ties as well as in online spaces separate from one's network of known ties. Some life transitions involve high levels of sharing with both separate audiences and broad audiences on social media, providing evidence for what previous research has called social media as social transition machinery. Researchers can use the Major Life Events Taxonomy to examine how people's life transition experiences relate to their behaviors, technology use, and health and well-being outcomes.
    Date
    10. 6.2021 19:22:47
  4. Yu, C.; Xue, H.; An, L.; Li, G.: ¬A lightweight semantic-enhanced interactive network for efficient short-text matching (2023) 0.07
    0.06625821 = product of:
      0.13251641 = sum of:
        0.13251641 = sum of:
          0.097654976 = weight(_text_:network in 890) [ClassicSimilarity], result of:
            0.097654976 = score(doc=890,freq=6.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.42611307 = fieldWeight in 890, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.0390625 = fieldNorm(doc=890)
          0.034861445 = weight(_text_:22 in 890) [ClassicSimilarity], result of:
            0.034861445 = score(doc=890,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.19345059 = fieldWeight in 890, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=890)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge-enhanced short-text matching has been a significant task attracting much attention in recent years. However, the existing approaches cannot effectively balance effectiveness and efficiency. Effective models usually consist of complex network structures, leading to slow inference speed and difficulties in practical application. In addition, most knowledge-enhanced models try to link the mentions in the text to the entities of knowledge graphs; the difficulties of entity linking reduce generalizability across different datasets. To address these problems, we propose a lightweight Semantic-Enhanced Interactive Network (SEIN) model for efficient short-text matching. Unlike most current research, SEIN employs an unsupervised method to select WordNet's most appropriate paraphrase description as the external semantic knowledge. It focuses on integrating the semantic and interactive information of the text while simplifying the structure of other modules. We conduct intensive experiments on four real-world datasets, namely Quora, Twitter-URL, SciTail, and SICK-E. Compared with state-of-the-art methods, SEIN achieves the best performance on most datasets. The experimental results show that introducing external knowledge can effectively improve the performance of short-text matching models. The research sheds light on the role of lightweight models in leveraging external knowledge to improve the effect of short-text matching.
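Selecting "the most appropriate paraphrase description" of a word without supervision can be illustrated with a simplified Lesk-style overlap between the sentence and each candidate gloss. This is a crude stand-in with invented glosses, not SEIN's actual selection method:

```python
# Simplified Lesk-style gloss selection: pick the candidate description
# whose words overlap most with the sentence context. Glosses below are
# invented for illustration; a real system would pull them from WordNet.
def best_gloss(sentence, glosses):
    context = set(sentence.lower().split())
    def overlap(gloss):
        return len(context & set(gloss.lower().split()))
    return max(glosses, key=overlap)

glosses = [
    "a financial institution that accepts deposits",
    "sloping land beside a body of water",
]
chosen = best_gloss("she sat on the bank of the river watching the water", glosses)
# the "sloping land" gloss wins on word overlap with the river context
```

The selected gloss would then be fed into the matching model as the external semantic signal, which is where SEIN's interactive integration takes over.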
    Date
    22. 1.2023 19:05:27
  5. Hottenrott, H.; Rose, M.E.; Lawson, C.: ¬The rise of multiple institutional affiliations in academia (2021) 0.06
    0.057298202 = product of:
      0.114596404 = sum of:
        0.114596404 = sum of:
          0.07973496 = weight(_text_:network in 313) [ClassicSimilarity], result of:
            0.07973496 = score(doc=313,freq=4.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.34791988 = fieldWeight in 313, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.0390625 = fieldNorm(doc=313)
          0.034861445 = weight(_text_:22 in 313) [ClassicSimilarity], result of:
            0.034861445 = score(doc=313,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.19345059 = fieldWeight in 313, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=313)
      0.5 = coord(1/2)
    
    Abstract
    This study provides the first systematic, international, large-scale evidence on the extent and nature of multiple institutional affiliations on journal publications. Studying more than 15 million authors and 22 million articles from 40 countries, we document that: In 2019, almost one in three articles was (co-)authored by authors with multiple affiliations, and the share of authors with multiple affiliations increased from around 10% to 16% since 1996. The growth of multiple affiliations is prevalent in all fields and is stronger in high-impact journals. About 60% of multiple affiliations are between institutions from within the academic sector. International co-affiliations, which account for about a quarter of multiple affiliations, most often involve institutions from the United States, China, Germany and the United Kingdom, suggesting a core-periphery network. Network analysis also reveals a number of communities of countries that are more likely to share affiliations. We discuss potential causes and show that the timing of the rise in multiple affiliations can be linked to the introduction of more competitive funding structures such as "excellence initiatives" in a number of countries. We discuss implications for science and science policy.
  6. Lorentzen, D.G.: Bridging polarised Twitter discussions : the interactions of the users in the middle (2021) 0.05
    0.05474554 = product of:
      0.10949108 = sum of:
        0.10949108 = sum of:
          0.06765735 = weight(_text_:network in 182) [ClassicSimilarity], result of:
            0.06765735 = score(doc=182,freq=2.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.29521978 = fieldWeight in 182, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.046875 = fieldNorm(doc=182)
          0.041833732 = weight(_text_:22 in 182) [ClassicSimilarity], result of:
            0.041833732 = score(doc=182,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.23214069 = fieldWeight in 182, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=182)
      0.5 = coord(1/2)
    
    Abstract
    Purpose The purpose of the paper is to analyse the interactions of bridging users in Twitter discussions about vaccination. Design/methodology/approach Conversational threads were collected by filtering the Twitter stream using keywords and the most active participants in the conversations. Following data collection and anonymisation of tweets and user profiles, a retweet network was created to find users bridging the main clusters. Four conversations were selected, ranging in length from 456 to 1,983 tweets, and then analysed through content analysis. Findings Although different opinions met in the discussions, a consensus was rarely built. Many sub-threads involved insults and criticism, and participants seemed uninterested in shifting their positions. However, examples of reasoned discussion were also found. Originality/value The study analyses conversations on Twitter, a platform that is rarely studied. The focus on the interactions of bridging users adds to the uniqueness of the paper.
    Date
    20. 1.2015 18:30:22
  7. Park, Y.J.: ¬A socio-technological model of search information divide in US cities (2021) 0.05
    0.05474554 = product of:
      0.10949108 = sum of:
        0.10949108 = sum of:
          0.06765735 = weight(_text_:network in 184) [ClassicSimilarity], result of:
            0.06765735 = score(doc=184,freq=2.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.29521978 = fieldWeight in 184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.046875 = fieldNorm(doc=184)
          0.041833732 = weight(_text_:22 in 184) [ClassicSimilarity], result of:
            0.041833732 = score(doc=184,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.23214069 = fieldWeight in 184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=184)
      0.5 = coord(1/2)
    
  8. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: ¬A user sensitive subject protection approach for book search service (2020) 0.05
    0.045621287 = product of:
      0.091242574 = sum of:
        0.091242574 = sum of:
          0.05638113 = weight(_text_:network in 5617) [ClassicSimilarity], result of:
            0.05638113 = score(doc=5617,freq=2.0), product of:
              0.22917621 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.05146125 = queryNorm
              0.2460165 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
          0.034861445 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
            0.034861445 = score(doc=5617,freq=2.0), product of:
              0.18020853 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05146125 = queryNorm
              0.19345059 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
      0.5 = coord(1/2)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming increasingly untrusted; thus, how to prevent the disclosure of users' book-query privacy is a growing concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
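The two constraints named in the abstract, high feature similarity to the real query and low exposure of its sensitive subjects, can be sketched as a simple selection rule over candidate fake queries. All names, vectors, and subject labels below are hypothetical, and this is an illustration of the constraints rather than the paper's actual algorithm:

```python
import math

# Pick plausible fake book queries: prefer candidates whose feature
# vectors resemble the real query (confusion effect) but which share no
# sensitive subject with it (cover-up effect). Data here is invented.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_fake_queries(real_vec, real_subjects, candidates, k=2):
    # candidates: list of (query, feature_vector, subjects)
    scored = []
    for query, vec, subjects in candidates:
        if subjects & real_subjects:      # would expose a sensitive subject
            continue
        scored.append((cosine(real_vec, vec), query))
    scored.sort(reverse=True)             # highest feature similarity first
    return [q for _, q in scored[:k]]

fakes = pick_fake_queries(
    real_vec=[1.0, 0.8, 0.1], real_subjects={"medical"},
    candidates=[
        ("cookbooks",   [0.9, 0.7, 0.2], {"cooking"}),
        ("drug guides", [1.0, 0.8, 0.1], {"medical"}),  # filtered: exposes subject
        ("travel maps", [0.1, 0.2, 0.9], {"travel"}),
    ])
# fakes == ["cookbooks", "travel maps"]
```

The selected fakes would be submitted alongside the real query, which is why the server-side search algorithm needs no modification.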
    Date
    6. 1.2020 17:22:25
  9. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.040867053 = product of:
      0.081734106 = sum of:
        0.081734106 = product of:
          0.24520232 = sum of:
            0.24520232 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24520232 = score(doc=862,freq=2.0), product of:
                0.43628904 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05146125 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  10. Malik, N.; Spencer, D.; Bui, Q.N.: Power in the U.S. political economy : a network analysis (2021) 0.04
    0.037821613 = product of:
      0.07564323 = sum of:
        0.07564323 = product of:
          0.15128645 = sum of:
            0.15128645 = weight(_text_:network in 3811) [ClassicSimilarity], result of:
              0.15128645 = score(doc=3811,freq=10.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.6601316 = fieldWeight in 3811, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3811)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many features of the U.S. political economy arise from the interactions between large political and economic institutions, and yet we know little about the nature of their interactions and the power distribution between these institutions. In this paper, we present a detailed analysis of networks of U.S.-based organizations, where edges represent three different kinds of relationships, namely owner-owned (ownerships), donor-donee (donations), and service provider-payee (transactions). Our findings suggest that in the ownerships network, the financial organizations form the core, and banking organizations hold strategic locations in the network. In the transactions network, the government organizations and agencies form the core, and defense-related organizations form the backbone. In contrast, with the donations network, no specific domain of organizations forms either the core or the backbone.
  11. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03405588 = product of:
      0.06811176 = sum of:
        0.06811176 = product of:
          0.20433527 = sum of:
            0.20433527 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20433527 = score(doc=5669,freq=2.0), product of:
                0.43628904 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05146125 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  12. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.03405588 = product of:
      0.06811176 = sum of:
        0.06811176 = product of:
          0.20433527 = sum of:
            0.20433527 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20433527 = score(doc=1000,freq=2.0), product of:
                0.43628904 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05146125 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. For the accompanying presentation, see: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  13. Jiang, Y.-C.; Li, H.: ¬The theoretical basis and basic principles of knowledge network construction in digital library (2023) 0.03
    0.033828676 = product of:
      0.06765735 = sum of:
        0.06765735 = product of:
          0.1353147 = sum of:
            0.1353147 = weight(_text_:network in 1130) [ClassicSimilarity], result of:
              0.1353147 = score(doc=1130,freq=8.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.59043956 = fieldWeight in 1130, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1130)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge network construction (KNC) is the essence of dynamic knowledge architecture and helps to realize ubiquitous knowledge services in digital libraries (DLs). The authors explore its theoretical foundations and basic rules to elucidate the basic principles of KNC in DLs. The results indicate that universal connection, the small-world phenomenon, relevance theory, and the unity and continuity of scientific development serve as the production tool, architectural aim, and scientific foundation of KNC in DLs. By analyzing both the characteristics of KNC based on different types of knowledge linking and the relationships between different forms of knowledge and the appropriate ways of linking them, the basic principle of KNC is summarized as follows: let each form of knowledge linking play to its strengths and each form of knowledge manifestation serve its intended purpose in practice, so that subjective and objective knowledge networks are organically combined. This lays a solid theoretical foundation and provides an action guide for DLs to construct knowledge networks.
  14. He, C.; Wu, J.; Zhang, Q.: Proximity-aware research leadership recommendation in research collaboration via deep neural networks (2022) 0.03
    0.03151801 = product of:
      0.06303602 = sum of:
        0.06303602 = product of:
          0.12607203 = sum of:
            0.12607203 = weight(_text_:network in 446) [ClassicSimilarity], result of:
              0.12607203 = score(doc=446,freq=10.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.5501096 = fieldWeight in 446, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=446)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Collaborator recommendation is of great significance for facilitating research collaboration. Proximities have been demonstrated to be significant factors and determinants of research collaboration. Research leadership is associated not only with the capability to integrate resources to launch and sustain a research project but also with the production and academic impact of the collaboration team. However, existing studies mainly focus on social or cognitive proximity, failing to integrate critical proximities comprehensively. Moreover, existing studies focus on recommending relationships among all coauthors, ignoring leadership in research collaboration. In this article, we propose a proximity-aware research leadership recommendation (PRLR) model that systematically integrates critical node attribute information (critical proximities) and network features to conduct research leadership recommendation by predicting the directed links in the research leadership network. PRLR integrates cognitive, geographical, and institutional proximity as node attribute information and constructs a leadership-aware coauthorship network to preserve the research leadership information. PRLR learns the node attribute information, the local network features, and the global network features with an autoencoder model, a joint probability constraint, and an attribute-aware skip-gram model, respectively. Extensive experiments and ablation studies demonstrate that PRLR significantly outperforms state-of-the-art collaborator recommendation models in research leadership recommendation.
  15. Zhu, Y.; Quan, L.; Chen, P.-Y.; Kim, M.C.; Che, C.: Predicting coauthorship using bibliographic network embedding (2023) 0.03
    0.03151801 = product of:
      0.06303602 = sum of:
        0.06303602 = product of:
          0.12607203 = sum of:
            0.12607203 = weight(_text_:network in 917) [ClassicSimilarity], result of:
              0.12607203 = score(doc=917,freq=10.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.5501096 = fieldWeight in 917, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=917)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Coauthorship prediction applies predictive analytics to bibliographic data to predict authors who are highly likely to be coauthors. In this study, we propose an approach for coauthorship prediction based on bibliographic network embedding, using a graph-based bibliographic data model that can represent common bibliographic data, including papers, terms, sources, authors, departments, research interests, universities, and countries. A real-world dataset released by AMiner that includes more than 2 million papers, 8 million citations, and 1.7 million authors was integrated into a large bibliographic network using the proposed bibliographic data model. Translation-based methods were applied to the entities and relationships to generate their low-dimensional embeddings while preserving their connectivity information in the original bibliographic network. We applied machine learning algorithms to embeddings that represent the coauthorship relationships of two authors and achieved high prediction performance. The reference model, which combines a network embedding size of 100, the most basic translation-based method, and a gradient boosting method, achieved an F1 score of 0.9, and even higher scores are obtainable with different embedding sizes and more advanced embedding methods. Thus, the strengths of the proposed approach lie in its customizable components under a unified framework.
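    The "most basic translation-based method" the abstract refers to is commonly TransE, which scores a triple (head, relation, tail) by how closely head + relation lands on tail in embedding space. A minimal sketch under that assumption; the entity and relation names are illustrative, not drawn from the AMiner dataset:

```python
import math
import random

random.seed(0)
DIM = 4  # embedding size; the paper's reference model uses 100

def rand_vec():
    return [random.uniform(-0.1, 0.1) for _ in range(DIM)]

# Illustrative entities and relations, not from the AMiner data.
entities = {name: rand_vec() for name in ("author_a", "author_b", "paper_p")}
relations = {"writes": rand_vec()}

def transe_score(h, r, t):
    # TransE plausibility: -||h + r - t||_2 (higher = more plausible).
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

s = transe_score(entities["author_a"], relations["writes"], entities["paper_p"])
```

    Training nudges the embeddings so observed triples score higher than corrupted ones (a margin-based ranking loss); the learned author embeddings then feed a downstream classifier such as gradient boosting for coauthorship prediction.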
  16. Jiang, X.; Liu, J.: Extracting the evolutionary backbone of scientific domains : the semantic main path network analysis approach based on citation context analysis (2023) 0.03
    0.03151801 = product of:
      0.06303602 = sum of:
        0.06303602 = product of:
          0.12607203 = sum of:
            0.12607203 = weight(_text_:network in 948) [ClassicSimilarity], result of:
              0.12607203 = score(doc=948,freq=10.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.5501096 = fieldWeight in 948, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=948)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Main path analysis is a popular method for extracting the scientific backbone from the citation network of a research domain. Existing approaches ignore the semantic relationships between citing and cited publications, which harms both the coherence of the extracted main paths and their coverage of significant studies. This paper advocates a semantic main path network analysis approach that alleviates these issues through citation function analysis. A wide variety of SciBERT-based deep learning models were designed to identify citation functions. Semantic citation networks were built either by including important citations (e.g., extension, motivation, usage, and similarity) or by excluding incidental citations such as background and future work. The semantic main path network was built by merging the top-K main paths extracted from various time slices of the semantic citation network. In addition, a three-way framework was proposed for the quantitative evaluation of main path analysis results. Both qualitative and quantitative analyses of three research areas of computational linguistics demonstrate that, compared with semantics-agnostic counterparts, different types of semantic main path networks provide complementary views of scientific knowledge flows. Combining them yields a more precise and comprehensive picture of domain evolution and uncovers more coherent development pathways between scientific ideas.
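    Conventional main path analysis weights each citation link by its search path count (SPC), the number of source-to-sink paths passing through that link, and then traces the heaviest links. A toy sketch of that generic, semantics-agnostic procedure (not the semantic variant proposed in the paper above):

```python
from collections import defaultdict

# Toy citation DAG: an edge (u, v) means knowledge flows from u to v.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("C", "E"),
         ("D", "F"), ("E", "F")]

succ, pred = defaultdict(list), defaultdict(list)
nodes = set()
for u, v in edges:
    succ[u].append(v)
    pred[v].append(u)
    nodes |= {u, v}

def n_minus(n):  # number of paths from any source (no predecessors) to n
    return 1 if not pred[n] else sum(n_minus(p) for p in pred[n])

def n_plus(n):   # number of paths from n to any sink (no successors)
    return 1 if not succ[n] else sum(n_plus(s) for s in succ[n])

# SPC of edge (u, v) = (#paths source -> u) * (#paths v -> sink).
spc = {(u, v): n_minus(u) * n_plus(v) for u, v in edges}

# Greedy main path: from a source, repeatedly follow the highest-SPC edge
# (ties broken by the first-listed successor).
source = next(n for n in sorted(nodes) if not pred[n])
path = [source]
while succ[path[-1]]:
    path.append(max(succ[path[-1]], key=lambda v: spc[(path[-1], v)]))
print(path)  # ['A', 'C', 'D', 'F']
```

    The semantic variant would first drop or down-weight incidental citations (background, future work) before computing SPC, changing which backbone survives.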
  17. Kang, M.: Motivational affordances and survival of new askers on social Q&A sites : the case of Stack Exchange network (2022) 0.03
    0.029296497 = product of:
      0.058592994 = sum of:
        0.058592994 = product of:
          0.11718599 = sum of:
            0.11718599 = weight(_text_:network in 447) [ClassicSimilarity], result of:
              0.11718599 = score(doc=447,freq=6.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.51133573 = fieldWeight in 447, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=447)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Social question-and-answer (Q&A) sites are platforms where users can freely ask, share, and rate knowledge. For the sustainable growth of social Q&A sites, maintaining askers is as critical as maintaining answerers. Based on motivational affordances theory and self-determination theory, this study explores the influence of the design elements of social Q&A sites (i.e., upvotes, downvotes, edits, user profile, and comments) on the survival of new askers. In addition, the moderating effect of having an alternative experience is examined. Online data on 25,000 new askers from the top five Q&A sites in the Technology category of the Stack Exchange network are analyzed using logistic regression. The results show that the competency- and autonomy-related design features of social Q&A sites motivate new askers to continue participating. Surprisingly, having an alternative experience shows a negative moderating effect, implying that alternative experiences increase switching costs in the Stack Exchange network. This study provides valuable insights for administrators of social Q&A sites as well as academics.
  18. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.03
    0.028190564 = product of:
      0.05638113 = sum of:
        0.05638113 = product of:
          0.11276226 = sum of:
            0.11276226 = weight(_text_:network in 5505) [ClassicSimilarity], result of:
              0.11276226 = score(doc=5505,freq=8.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.492033 = fieldWeight in 5505, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5505)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Web document repositories are now collaboratively created and edited. One of these repositories, Wikipedia, faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality, but they do not provide satisfactory results and fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) networks, CNN-LSTMs, bidirectional LSTMs, and stacked LSTMs, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to classification performance and training performance. We include an importance analysis of different features and feature sets to determine which are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
  19. Sun, J.; Zhu, M.; Jiang, Y.; Liu, Y.; Wu, L.L.: Hierarchical attention model for personalized tag recommendation : peer effects on information value perception (2021) 0.03
    0.028190564 = product of:
      0.05638113 = sum of:
        0.05638113 = product of:
          0.11276226 = sum of:
            0.11276226 = weight(_text_:network in 98) [ClassicSimilarity], result of:
              0.11276226 = score(doc=98,freq=8.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.492033 = fieldWeight in 98, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=98)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    With the development of Web-based social networks, many personalized tag recommendation approaches based on multiple kinds of information have been proposed. Because users' preferences differ, different users care about different kinds of information, and within each kind of information, different elements are differentially informative for user tagging behaviors. In this context, effectively integrating the different elements and the different kinds of information becomes a key part of tag recommendation, yet existing methods ignore it. To address this problem, we propose a deep neural network for tag recommendation. Specifically, we model two important attentive aspects with a hierarchical attention model. For different user-item pairs, the bottom-layer attention network models the influence of different elements on the feature representation of the information, while the top-layer attention network models the attentive scores of the different kinds of information. To verify the effectiveness of the proposed method, we conduct extensive experiments on two real-world data sets. The results show that using attention networks and different kinds of information significantly improves the performance of the recommendation model, verifying the effectiveness and superiority of our proposed approach.
  20. Lee, S.; Benedict, B.C.; Ge, Y.G.; Murray-Tuite, P.; Ukkusuri, S.V.: An application of media and network multiplexity theory to the structure and perceptions of information environments in hurricane evacuation (2021) 0.03
    0.028190564 = product of:
      0.05638113 = sum of:
        0.05638113 = product of:
          0.11276226 = sum of:
            0.11276226 = weight(_text_:network in 264) [ClassicSimilarity], result of:
              0.11276226 = score(doc=264,freq=8.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.492033 = fieldWeight in 264, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=264)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Understanding how information use contributes to uncertainties surrounding evacuation decisions is crucial during disasters. While the literature increasingly establishes that people consult multiple information sources in disaster situations, little is known about the patterns in which multiple media and personal network sources are combined simultaneously and sequentially across decision-making phases. We address this gap using survey data collected from households in Jacksonville, Florida affected by Hurricane Matthew in 2016. Results direct attention to perceived consistency of information as a key predictor of uncertainty regarding hurricane impact and evacuation logistics. Frequently utilizing the National Weather Service, national and local TV channels, and personal network contacts contributed to higher perceived consistency of information, while the use of other local and online sources was associated with lower perceived consistency. Furthermore, combining a larger number of media and official sources predicted higher levels of perceived information consistency. One's perception of information amount did not significantly explain uncertainty. This study contributes to theorizing individuals' information environments from the perspective of media and network multiplexity and provides practical implications regarding the need for information coordination to improve evacuation decision-making.
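    The relevance figures attached to each entry above are Lucene ClassicSimilarity explain trees: fieldWeight = tf(freq) x idf x fieldNorm with tf = sqrt(freq), queryWeight = idf x queryNorm, and their product is scaled by the coord factors. A minimal sketch reproducing the numbers for entry 15 (doc 917, term "network"):

```python
import math

# Constants copied from the explain tree for doc 917 ("network", freq = 10).
idf = 4.4533744          # idf(docFreq=1398, maxDocs=44218)
query_norm = 0.05146125  # queryNorm
field_norm = 0.0390625   # fieldNorm(doc=917)
freq = 10.0              # termFreq

tf = math.sqrt(freq)                      # ClassicSimilarity tf
field_weight = tf * idf * field_norm      # 0.5501096 in the explain output
query_weight = idf * query_norm           # 0.22917621 (queryWeight)
term_score = field_weight * query_weight  # 0.12607203
final = term_score * 0.5 * 0.5            # two coord(1/2) factors
print(round(final, 8))  # 0.03151801
```

    The two coord(1/2) factors reflect that only one of two query clauses matched this document, which is why the listed score (0.03) is a quarter of the raw term score.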

Languages

  • e 121
  • d 30

Types

  • a 142
  • el 21
  • m 4
  • p 3
  • x 1