Search (732 results, page 2 of 37)

  • year_i:[2010 TO 2020}
  1. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.04
    0.03645768 = product of:
      0.07291536 = sum of:
        0.07291536 = product of:
          0.14583072 = sum of:
            0.14583072 = weight(_text_:light in 1536) [ClassicSimilarity], result of:
              0.14583072 = score(doc=1536,freq=10.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.49938247 = fieldWeight in 1536, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1536)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
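    The indented breakdown above is a Lucene "explain" tree for ClassicSimilarity (tf-idf) scoring. As a minimal sketch, assuming stock ClassicSimilarity (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the leaf weight of this result can be recomputed:

    ```python
    import math

    def explain_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        """Recompute one weight(_text_:term) leaf of a ClassicSimilarity explain tree."""
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 5.7753086 for "light"
        tf = math.sqrt(freq)                               # 3.1622777 for freq=10
        query_weight = idf * query_norm                    # 0.2920221
        field_weight = tf * idf * field_norm               # 0.49938247
        return query_weight * field_weight

    # Values taken from the tree above (term "light", doc 1536):
    w = explain_weight(freq=10.0, doc_freq=372, max_docs=44218,
                       query_norm=0.050563898, field_norm=0.02734375)
    # w ~ 0.14583072; the two coord(1/2) factors then yield the listed total:
    score = w * 0.5 * 0.5
    ```

    The same recomputation applies to every tree below; only freq, idf, fieldNorm, and the coord factors change per hit.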
    
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. To this end, multiword expressions must first be identified in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
    In this thesis, we focused on the automatic detection of multiword expressions in natural language texts. On the basis of the main contributions, we can argue that:
    - Supervised machine learning methods can be successfully applied for the automatic detection of different types of multiword expressions in natural language texts.
    - Machine learning-based multiword expression detection can be successfully carried out for English as well as for Hungarian.
    - Our supervised machine learning-based model was successfully applied to the automatic detection of nominal compounds from English raw texts.
    - We developed a Wikipedia-based dictionary labeling method to automatically detect English nominal compounds.
    - Prior knowledge of nominal compounds can enhance Named Entity Recognition, while previously identified named entities can assist the nominal compound identification process.
    - The machine learning-based method can also provide acceptable results when trained on an automatically generated silver standard corpus.
    - As named entities form one semantic unit, may consist of more than one word, and function as a noun, we can treat them in a similar way to nominal compounds.
    - Our sequence labeling-based tool can be successfully applied for identifying verbal light verb constructions in two typologically different languages, namely English and Hungarian.
    - Domain adaptation techniques may help diminish the distance between domains in the automatic detection of light verb constructions.
    - Our syntax-based method can be successfully applied for the full-coverage identification of light verb constructions. As a first step, a data-driven candidate extraction method can be utilized. Afterwards, a machine learning approach that makes use of an extended and rich feature set selects LVCs among the extracted candidates.
    - When a precise syntactic parser is available for the actual domain, full-coverage identification performs better. In other cases, the use of the sequence labeling method is recommended.
  2. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.03
    0.033462033 = product of:
      0.066924065 = sum of:
        0.066924065 = product of:
          0.20077218 = sum of:
            0.20077218 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.20077218 = score(doc=4997,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.033462033 = product of:
      0.066924065 = sum of:
        0.066924065 = product of:
          0.20077218 = sum of:
            0.20077218 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20077218 = score(doc=4388,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  4. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.03
    0.033462033 = product of:
      0.066924065 = sum of:
        0.066924065 = product of:
          0.20077218 = sum of:
            0.20077218 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.20077218 = score(doc=855,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  5. Pimentel, D.M.: Examining the KO roots of Taylor's value-added model (2010) 0.03
    0.03260874 = product of:
      0.06521748 = sum of:
        0.06521748 = product of:
          0.13043496 = sum of:
            0.13043496 = weight(_text_:light in 3286) [ClassicSimilarity], result of:
              0.13043496 = score(doc=3286,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.44666123 = fieldWeight in 3286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3286)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Value-Added Model, as developed by Robert Taylor in his 1986 monograph Value-Added Processes in Information Systems, has been highly influential in the field of library and information science. Yet despite its impact on the broader LIS field, the potential of the Value-Added Model has gone largely unexplored by knowledge organization (KO) researchers. Unraveling the history behind the Model's development highlights the significant contributions made by studying the work practices of professional indexers. In light of its foundation on KO praxis, this paper reexamines Taylor's Model as a robust framework for evaluating knowledge organization systems.
  6. Hudon, M.: Teaching Classification, 1990-2010 (2010) 0.03
    0.03260874 = product of:
      0.06521748 = sum of:
        0.06521748 = product of:
          0.13043496 = sum of:
            0.13043496 = weight(_text_:light in 3569) [ClassicSimilarity], result of:
              0.13043496 = score(doc=3569,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.44666123 = fieldWeight in 3569, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3569)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Cataloging and classification education has been a recurring topic in the library and information science literature since the creation of the first library school toward the end of the nineteenth century. This article examines the literature of the past 20 years, in an era of major changes in the way documents and information transit from their creators to their ultimate users. It concludes by suggesting several aspects of classification education that need to be investigated further, in light of these new circumstances.
  7. Rayson, P.; Piao, S.; Sharoff, S.; Evert, S.; Moiron, B.V.: Multiword expressions : hard going or plain sailing? (2015) 0.03
    0.03260874 = product of:
      0.06521748 = sum of:
        0.06521748 = product of:
          0.13043496 = sum of:
            0.13043496 = weight(_text_:light in 2918) [ClassicSimilarity], result of:
              0.13043496 = score(doc=2918,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.44666123 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2918)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over the past two decades or so, Multi-Word Expressions (MWEs; also called Multi-word Units) have been an increasingly important concern for Computational Linguistics and Natural Language Processing (NLP). The term MWE has been used to refer to various types of linguistic units and expressions, including idioms, noun compounds, phrasal verbs, light verbs and other habitual collocations. However, while there is no universally agreed definition for MWE as yet, most researchers use the term to refer to those frequently occurring phrasal units which are subject to a certain level of semantic opaqueness, or non-compositionality. Non-compositional MWEs pose tough challenges for automatic analysis because their interpretation cannot be achieved by directly combining the semantics of their constituents, thereby causing the "pain in the neck of NLP".
  8. Kleineberg, M.: Integrative levels (2017) 0.03
    0.03260874 = product of:
      0.06521748 = sum of:
        0.06521748 = product of:
          0.13043496 = sum of:
            0.13043496 = weight(_text_:light in 3840) [ClassicSimilarity], result of:
              0.13043496 = score(doc=3840,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.44666123 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article provides a historical overview and conceptual clarification of the idea of integrative levels as an organizing principle. It will be demonstrated that this concept has found different articulations (e.g., levels of integration, levels of organization, levels of complexity, levels of granularity, nested hierarchy, specification hierarchy, hierarchical integration, progressive integration, holarchy, superformation, self-organization cycles) and widespread applications based on various, often unrelated theoretical and disciplinary backgrounds. In order to determine its role in the field of knowledge organization, some common misconceptions and major criticisms will be reconsidered in light of a broader multidisciplinary context. In particular, it will be shown how this organizing principle has been fruitfully applied to human-related research areas such as psychology, social sciences, or humanities in terms of integrative levels of knowing.
  9. Nahotko, M.: Genre groups in knowledge organization (2016) 0.03
    0.03260874 = product of:
      0.06521748 = sum of:
        0.06521748 = product of:
          0.13043496 = sum of:
            0.13043496 = weight(_text_:light in 5139) [ClassicSimilarity], result of:
              0.13043496 = score(doc=5139,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.44666123 = fieldWeight in 5139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5139)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The article is an introduction to the development of Andersen's concept of textual tools used in knowledge organization (KO) in light of the theory of genres and activity systems. In particular, the discussion is based on the concepts of genre connectivity and genre group, in addition to previously established concepts such as genre hierarchy, set, system, and repertoire. Five genre groups used in KO are described. The analysis of groups, systems, and selected genres used in KO is provided, based on the method proposed by Yates and Orlikowski. The aim is to show the genre system as a part of the activity system, and thus as a framework for KO.
  10. Scharl, A.; Hubmann-Haidvogel, A.H.; Jones, A.; Fischl, D.; Kamolov, R.; Weichselbraun, A.; Rafelsberger, W.: Analyzing the public discourse on works of fiction : detection and visualization of emotion in online coverage about HBO's Game of Thrones (2016) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 842) [ClassicSimilarity], result of:
              0.11180139 = score(doc=842,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 842, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=842)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a Web intelligence portal that captures and aggregates news and social media coverage about "Game of Thrones", an American drama television series created for the HBO television network based on George R.R. Martin's series of fantasy novels. The system collects content from the Web sites of Anglo-American news media as well as from four social media platforms: Twitter, Facebook, Google+ and YouTube. An interactive dashboard with trend charts and synchronized visual analytics components not only shows how often Game of Thrones events and characters are being mentioned by journalists and viewers, but also provides a real-time account of concepts that are being associated with the unfolding storyline and each new episode. Positive or negative sentiment is computed automatically, which sheds light on the perception of actors and new plot elements.
  11. Antin, J.; Earp, M.: With a little help from my friends : self-interested and prosocial behavior on MySpace Music (2010) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 3458) [ClassicSimilarity], result of:
              0.11180139 = score(doc=3458,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 3458, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3458)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, we explore the dynamics of prosocial and self-interested behavior among musicians on MySpace Music. MySpace Music is an important platform for social interactions and at the same time provides musicians with the opportunity for significant profit. We argue that these forces can be in tension with each other, encouraging musicians to make strategic choices about using MySpace to promote their own or others' rewards. We look for evidence of self-interested and prosocial friending strategies in the social network created by Top Friends links. We find strong evidence that individual preferences for prosocial and self-interested behavior influence friending strategies. Furthermore, our data illustrate a robust relationship between increased prominence and increased attention to others' rewards. These results shed light on how musicians manage their interactions in complex online environments and extend research on social values by demonstrating consistent preferences for prosocial or self-interested behavior in a multifaceted online setting.
  12. Halpin, H.; Hayes, P.J.; McCusker, J.P.; McGuinness, D.L.; Thompson, H.S.: When owl:sameAs isn't the same : an analysis of identity in linked data (2010) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 4703) [ClassicSimilarity], result of:
              0.11180139 = score(doc=4703,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 4703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4703)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regard to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially-opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
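    The "often too strong" point above can be made concrete: under full identity semantics, every property asserted of one resource also holds of the other. A minimal closure sketch, using invented example triples (the `ex:` URIs and property values are illustrative assumptions, not data from the paper's experiment):

    ```python
    # Why owl:sameAs can be too strong: properties leak across the identity link.
    SAME_AS = "owl:sameAs"

    triples = {
        ("ex:BerlinCity", SAME_AS, "ex:BerlinAdminEntity"),
        ("ex:BerlinCity", "ex:population", "3600000"),
        ("ex:BerlinAdminEntity", "ex:budget", "ex:someBudget"),
    }

    def sameas_closure(triples):
        """Propagate every non-sameAs property across owl:sameAs pairs, both ways."""
        pairs = {(s, o) for s, p, o in triples if p == SAME_AS}
        pairs |= {(o, s) for s, o in pairs}
        out = set(triples)
        for a, b in pairs:
            out |= {(b, p, o) for s, p, o in triples if s == a and p != SAME_AS}
        return out

    closed = sameas_closure(triples)
    # The budget asserted only for the administrative entity now also holds
    # for the city resource - exactly the kind of inference a weaker,
    # similarity-style link would block:
    leaked = ("ex:BerlinCity", "ex:budget", "ex:someBudget") in closed
    ```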
  13. Cheng, A.-S.; Fleischmann, K.R.; Wang, P.; Ishita, E.; Oard, D.W.: ¬The role of innovation and wealth in the net neutrality debate : a content analysis of human values in congressional and FCC hearings (2012) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 276) [ClassicSimilarity], result of:
              0.11180139 = score(doc=276,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Net neutrality is the focus of an important policy debate that is tied to technological innovation, economic development, and information access. We examine the role of human values in shaping the Net neutrality debate through a content analysis of testimonies from U.S. Senate and FCC hearings on Net neutrality. The analysis is based on a coding scheme that we developed based on a pilot study in which we used the Schwartz Value Inventory. We find that the policy debate surrounding Net neutrality revolves primarily around differences in the frequency of expression of the values of innovation and wealth, such that the proponents of Net neutrality more frequently invoke innovation, while the opponents of Net neutrality more frequently invoke wealth in their prepared testimonies. The paper provides a novel approach for examining the Net neutrality debate and sheds light on the connection between information policy and research on human values.
  14. Mai, J.-E.: ¬The quality and qualities of information (2013) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 679) [ClassicSimilarity], result of:
              0.11180139 = score(doc=679,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=679)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper discusses and analyzes the notion of information quality in terms of a pragmatic philosophy of language. It is argued that the notion of information quality is of great importance, and needs to be situated better within a sound philosophy of information to help frame information quality in a broader conceptual light. It is found that much research on information quality conceptualizes information quality as either an inherent property of the information itself, or as an individual mental construct of the users. The notion of information quality is often not situated within a philosophy of information. This paper outlines a conceptual framework in which information is regarded as a semiotic sign, and extends that notion with Paul Grice's pragmatic philosophy of language to provide a conversational notion of information quality that is contextual and tied to the notion of meaning.
  15. Mirel, B.; Tonks, J.S.; Song, J.; Meng, F.; Xuan, W.; Ameziane, R.: Studying PubMed usages in the field for complex problem solving : implications for tool design (2013) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 738) [ClassicSimilarity], result of:
              0.11180139 = score(doc=738,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=738)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many recent studies on MEDLINE-based information seeking have shed light on scientists' behaviors and associated tool innovations that may improve efficiency and effectiveness. Few, if any, studies, however, examine scientists' problem-solving uses of PubMed in actual contexts of work and corresponding needs for better tool support. Addressing this gap, we conducted a field study of novice scientists (14 upper-level undergraduate majors in molecular biology) as they engaged in a problem-solving activity with PubMed in a laboratory setting. Findings reveal many common stages and patterns of information seeking across users as well as variations, especially variations in cognitive search styles. Based on these findings, we suggest tool improvements that both confirm and qualify many results found in other recent studies. Our findings highlight the need to use results from context-rich studies to inform decisions in tool design about when to offer improved features to users.
  16. Thelwall, M.; Buckley, K.: Topic-based sentiment analysis for the social web : the role of mood and issue-related words (2013) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 1004) [ClassicSimilarity], result of:
              0.11180139 = score(doc=1004,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 1004, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    General sentiment analysis for the social web has become increasingly useful for shedding light on the role of emotion in online communication and offline events in both academic research and data journalism. Nevertheless, existing general-purpose social web sentiment analysis algorithms may not be optimal for texts focused on specific topics. This article introduces 2 new methods, mood setting and lexicon extension, to improve the accuracy of topic-specific lexical sentiment strength detection for the social web. Mood setting allows the topic mood to determine the default polarity for ostensibly neutral expressive text. Topic-specific lexicon extension involves adding topic-specific words to the default general sentiment lexicon. Experiments with 8 data sets show that both methods can improve sentiment analysis performance in corpora and are recommended when the topic focus is tightest.
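    The two methods described in this abstract can be sketched minimally as follows; the lexicon entries, topic words, and mood default are illustrative assumptions only, not the authors' actual lexicon or implementation:

    ```python
    # Minimal sketch of lexicon extension and mood setting for topic-specific
    # sentiment scoring. All word scores below are invented for illustration.
    GENERAL_LEXICON = {"love": 3, "hate": -4, "good": 2, "awful": -3}

    def extend_lexicon(lexicon, topic_terms):
        """Lexicon extension: add topic-specific sentiment words to the general lexicon."""
        extended = dict(lexicon)
        extended.update(topic_terms)
        return extended

    def score_text(text, lexicon, topic_mood=0):
        """Mood setting: ostensibly neutral text falls back to the topic's mood polarity."""
        hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
        return sum(hits) if hits else topic_mood

    lex = extend_lexicon(GENERAL_LEXICON, {"riot": -3, "victory": 2})
    s1 = score_text("what a victory", lex)                    # topic word found
    s2 = score_text("it happened today", lex, topic_mood=-1)  # neutral -> mood default
    ```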
  17. Wu, P.F.; Korfiatis, N.: You scratch someone's back and we'll scratch yours : collective reciprocity in social Q&A communities (2013) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 1079) [ClassicSimilarity], result of:
              0.11180139 = score(doc=1079,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Taking a structuration perspective and integrating reciprocity research in economics, this study examines the dynamics of reciprocal interactions in social question & answer communities. We postulate that individual users of social Q&A constantly adjust their kindness in the direction of the observed benefit and effort of others. Collective reciprocity emerges from this pattern of conditional strategy of reciprocation and helps form a structure that guides the very interactions that give birth to the structure. Based on a large sample of data from Yahoo! Answers, our empirical analysis supports the collective reciprocity premise, showing that the more effort (relative to benefit) an asker contributes to the community, the more likely the community will return the favor. On the other hand, the more benefit (relative to effort) the asker takes from the community, the less likely the community will cooperate in terms of providing answers. We conclude that a structuration view of reciprocity sheds light on the duality of social norms in online communities.
  18. Zhang, L.: Linking information through function (2014) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 1526) [ClassicSimilarity], result of:
              0.11180139 = score(doc=1526,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 1526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1526)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  19. Kempf, A.O.; Baum, K.: Thesaurus-based indexing of research data in the social sciences : opportunities and difficulties of internationalization efforts (2013) 0.03
    Abstract
    Efforts towards internationalization have become increasingly important in scientific environments. For the content-based indexing of scientific research data, however, standards that would ensure internationally coherent indexing, which is vital for retrieval purposes, are not yet sufficiently developed. Even for the concrete use of indexing instruments launched by international initiatives, there are still no binding policies or guidelines. Against this backdrop, essential criteria which internationally applicable indexing systems should meet will be outlined. These will be illustrated through the multilingual European Language Social Science Thesaurus (ELSST), originally based on the UK Data Archive's (UKDA) Humanities and Social Science Electronic Thesaurus (HASSET) and ultimately developed by the Council of European Social Science Data Archives (CESSDA). Additionally, the general pros and cons of using international versus national indexing languages will be weighed using ELSST and the Thesaurus for the Social Sciences (TSS) developed by GESIS - Leibniz-Institute for the Social Sciences. In this light, the benefit of vocabulary crosswalks for supporting a combined use of international and national indexing systems will be discussed.
  20. Tan, B.; Pan, S.L.; Zuo, M.: Harnessing collective IT resources for sustainability : insights from the green leadership strategy of China mobile (2015) 0.03
    Abstract
    Green information technology (IT) initiatives cannot be implemented in isolation if they are to have a significant and lasting impact on environmental sustainability. Instead, there is a need to harness the collective IT resources of the diverse stakeholders operating in the interorganizational business networks that characterize the contemporary business landscape. This, in turn, demands an appropriate leadership structure. However, the notion of "green leadership" has not received adequate research attention to date. Using a case study of green IT implementation at China Mobile, the world's largest mobile telecommunications provider, this study seeks to shed light on the underlying process through which green leadership is achieved and subsequently enacted to facilitate collective green IT initiatives. With its findings, this study presents a process theory that complements the dominant, internally-oriented perspective of green IT and provides practitioners with a useful reference for leveraging the collective IT resources of their network partners to contribute toward preserving the environment for future generations.

Languages

  • e 542
  • d 181
  • a 1
  • hu 1
  • sp 1

Types

  • a 642
  • el 66
  • m 47
  • s 16
  • x 13
  • r 7
  • b 5
  • i 1
  • z 1
