Search (69 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.11813232 = sum of:
      0.09406087 = product of:
        0.2821826 = sum of:
          0.2821826 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2821826 = score(doc=562,freq=2.0), product of:
              0.5020882 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.059222404 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.024071453 = product of:
        0.048142906 = sum of:
          0.048142906 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.048142906 = score(doc=562,freq=2.0), product of:
              0.20738676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059222404 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
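
  The score breakdowns shown for each hit follow Lucene's ClassicSimilarity (TF-IDF): tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, plus a coord factor for the fraction of query clauses that matched. A minimal sketch reproducing the first partial score of entry 1 (the queryNorm value is copied from the output above; everything else is the standard formula):

    import math

    def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        """One term's partial score under Lucene's ClassicSimilarity."""
        tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
        idf = math.log(max_docs / (doc_freq + 1)) + 1.0  # 8.478011 for docFreq=24
        query_weight = idf * query_norm                  # 0.5020882
        field_weight = tf * idf * field_norm             # 0.56201804
        return query_weight * field_weight               # 0.2821826

    # Term "3a" in doc 562: freq=2, docFreq=24, maxDocs=44218,
    # queryNorm=0.059222404, fieldNorm=0.046875; coord(1/3) scales the clause sum.
    print(classic_term_score(2.0, 24, 44218, 0.059222404, 0.046875) / 3)
    # -> ~0.09406087, the first summand of entry 1's total of 0.11813232

  The same formula, with each term's docFreq and the per-document fieldNorm, accounts for every other breakdown on this page.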
  2. Agüera y Arcas, B.: Artificial neural networks are making strides towards consciousness (2022) 0.05
    0.052816946 = product of:
      0.10563389 = sum of:
        0.10563389 = product of:
          0.21126778 = sum of:
            0.21126778 = weight(_text_:making in 861) [ClassicSimilarity], result of:
              0.21126778 = score(doc=861,freq=4.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.7465925 = fieldWeight in 861, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=861)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    The Economist. 2022, [https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas?giftId=89e08696-9884-4670-b164-df58fffdf067]
  3. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.05
    0.047030434 = product of:
      0.09406087 = sum of:
        0.09406087 = product of:
          0.2821826 = sum of:
            0.2821826 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.2821826 = score(doc=862,freq=2.0), product of:
                0.5020882 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.059222404 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Hofstadter, D.: Artificial neural networks today are not conscious (2022) 0.04
    0.03734722 = product of:
      0.07469444 = sum of:
        0.07469444 = product of:
          0.14938888 = sum of:
            0.14938888 = weight(_text_:making in 860) [ClassicSimilarity], result of:
              0.14938888 = score(doc=860,freq=2.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5279206 = fieldWeight in 860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=860)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    See also: Agüera y Arcas, B.: Artificial neural networks are making strides towards consciousness.
  5. Warner, A.J.: Natural language processing (1987) 0.03
    0.032095272 = product of:
      0.064190544 = sum of:
        0.064190544 = product of:
          0.12838109 = sum of:
            0.12838109 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.12838109 = score(doc=337,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  6. Fox, B.; Fox, C.J.: Efficient stemmer generation (2002) 0.03
    0.029877776 = product of:
      0.059755553 = sum of:
        0.059755553 = product of:
          0.119511105 = sum of:
            0.119511105 = weight(_text_:making in 2585) [ClassicSimilarity], result of:
              0.119511105 = score(doc=2585,freq=2.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.4223365 = fieldWeight in 2585, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2585)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents an algorithm for generating stemmers from text stemmer specification files. A small study shows that the generated stemmers are computationally efficient, often running faster than stemmers custom written to implement particular stemming algorithms. The stemmer specification files are easily written and modified by non-programmers, making it much easier to create a stemmer, or tune a stemmer's performance, than would be the case with a custom stemmer program. Stemmer generation is thus also human-resource efficient.
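
  Fox and Fox's specification format is not reproduced in this record, so purely as a hypothetical illustration of a specification-driven stemmer (the rule table and its fields are invented here, not their actual syntax), the run-time core can be as small as a first-match suffix rewrite over a rule list:

    # Hypothetical spec entries: (suffix, replacement, minimum stem length).
    # A generator would read these from a stemmer specification file.
    RULES = [
        ("ization", "ize", 2),
        ("ational", "ate", 2),
        ("ness", "", 3),
        ("ing", "", 3),
        ("s", "", 3),
    ]

    def stem(word):
        """Apply the first matching suffix rule, in specification order."""
        for suffix, replacement, min_stem in RULES:
            stem_len = len(word) - len(suffix)
            if word.endswith(suffix) and stem_len >= min_stem:
                return word[:stem_len] + replacement
        return word

    print(stem("relational"))       # -> relate
    print(stem("nationalization"))  # -> nationalize

  Generating code from such a table, rather than interpreting it at run time, is one plausible route to the speed advantage the abstract reports.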
  7. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.03
    0.028083362 = product of:
      0.056166723 = sum of:
        0.056166723 = product of:
          0.11233345 = sum of:
            0.11233345 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.11233345 = score(doc=3164,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  8. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    0.028083362 = product of:
      0.056166723 = sum of:
        0.056166723 = product of:
          0.11233345 = sum of:
            0.11233345 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.11233345 = score(doc=4506,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  9. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    0.028083362 = product of:
      0.056166723 = sum of:
        0.056166723 = product of:
          0.11233345 = sum of:
            0.11233345 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.11233345 = score(doc=6672,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  10. New tools for human translators (1997) 0.03
    0.028083362 = product of:
      0.056166723 = sum of:
        0.056166723 = product of:
          0.11233345 = sum of:
            0.11233345 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.11233345 = score(doc=1179,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  11. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.03
    0.028083362 = product of:
      0.056166723 = sum of:
        0.056166723 = product of:
          0.11233345 = sum of:
            0.11233345 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.11233345 = score(doc=3117,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  12. Der Student aus dem Computer (2023) 0.03
    0.028083362 = product of:
      0.056166723 = sum of:
        0.056166723 = product of:
          0.11233345 = sum of:
            0.11233345 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.11233345 = score(doc=1079,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  13. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.024071453 = product of:
      0.048142906 = sum of:
        0.048142906 = product of:
          0.09628581 = sum of:
            0.09628581 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.09628581 = score(doc=4483,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  14. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.024071453 = product of:
      0.048142906 = sum of:
        0.048142906 = product of:
          0.09628581 = sum of:
            0.09628581 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.09628581 = score(doc=4888,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  15. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.024071453 = product of:
      0.048142906 = sum of:
        0.048142906 = product of:
          0.09628581 = sum of:
            0.09628581 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.09628581 = score(doc=5429,freq=2.0), product of:
                0.20738676 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059222404 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  16. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.02
    0.02264055 = product of:
      0.0452811 = sum of:
        0.0452811 = product of:
          0.0905622 = sum of:
            0.0905622 = weight(_text_:making in 127) [ClassicSimilarity], result of:
              0.0905622 = score(doc=127,freq=6.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.3200349 = fieldWeight in 127, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined is the question of how we can obtain such global statistics and to what extent their use will lead to a drop in retrieval effectiveness. In chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries, based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then, a novel evaluation framework is introduced, including a new measure for comparing results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
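
  The mechanism behind "comparable across peers" can be pictured with a toy sketch (the statistics, values, and function names below are invented for illustration, not Witschel's implementation): when every peer computes idf from one shared set of global document frequencies instead of its local collection, identical term weights result everywhere, and merged result lists rank consistently.

    import math

    # Toy shared global statistics, distributed to every peer in advance.
    GLOBAL_DF = {"peer": 120, "retrieval": 300}  # global document frequencies
    GLOBAL_N = 10_000                            # assumed global collection size

    def peer_score(term_freqs):
        """Score a document locally but with global idf, so scores are
        comparable across peers without knowing the complete collection."""
        return sum(tf * math.log(GLOBAL_N / GLOBAL_DF[t])
                   for t, tf in term_freqs.items() if t in GLOBAL_DF)

    # Two peers score their own documents independently; the merged ranking
    # is meaningful because both used the same statistics.
    results_a = [("docA1", peer_score({"peer": 3, "retrieval": 1}))]
    results_b = [("docB7", peer_score({"retrieval": 4}))]
    print(sorted(results_a + results_b, key=lambda r: r[1], reverse=True))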
  17. Mustafa el Hadi, W.: Automatic term recognition & extraction tools : examining the new interfaces and their effective communication role in LSP discourse (1998) 0.02
    0.022408333 = product of:
      0.044816665 = sum of:
        0.044816665 = product of:
          0.08963333 = sum of:
            0.08963333 = weight(_text_:making in 67) [ClassicSimilarity], result of:
              0.08963333 = score(doc=67,freq=2.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.31675237 = fieldWeight in 67, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=67)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we discuss the possibility of reorienting NLP (Natural Language Processing) systems towards not only the extraction of terms and their semantic relations, but also a variety of other uses: the storage, accessing and retrieval of Language for Special Purposes (LSP) lexical combinations, and the provision of contexts and other information on terms through the integration of more interfaces to terminological databases, term-managing systems and existing NLP systems. The aim of making such interfaces available is to increase the efficiency of the systems and improve terminology-oriented text analysis. Since automatic term extraction is the backbone of many applications such as machine translation (MT), indexing, technical writing, thesaurus construction and knowledge representation, developments in this area will have a significant impact.
  18. Zhou, L.; Zhang, D.: NLPIR: a theoretical framework for applying Natural Language Processing to information retrieval (2003) 0.02
    0.022408333 = product of:
      0.044816665 = sum of:
        0.044816665 = product of:
          0.08963333 = sum of:
            0.08963333 = weight(_text_:making in 5148) [ClassicSimilarity], result of:
              0.08963333 = score(doc=5148,freq=2.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.31675237 = fieldWeight in 5148, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5148)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Zhou and Zhang believe that for the potential of natural language processing (NLP) to be reached in information retrieval, a framework for guiding the effort should be in place. They provide a graphic model that identifies different levels of NLP effort during the query-document matching process. A direct matching approach uses little NLP; an expansion approach with thesauri, a little more; an extraction approach will often use a variety of NLP techniques as well as statistical methods. A transformation approach, which creates intermediate representations of documents and queries, is a step higher in NLP usage, and a uniform approach, which relies on a body of knowledge beyond that of the documents and queries to provide inference and sense-making prior to matching, would require a maximum NLP effort.
  19. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by Generative Pre-Training 0.02
    0.022408333 = product of:
      0.044816665 = sum of:
        0.044816665 = product of:
          0.08963333 = sum of:
            0.08963333 = weight(_text_:making in 870) [ClassicSimilarity], result of:
              0.08963333 = score(doc=870,freq=2.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.31675237 = fieldWeight in 870, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
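
  The "task-aware input transformations" mentioned in the abstract can be sketched as follows (the token strings and task names are placeholders, not the paper's actual vocabulary): each structured task instance is flattened into a single ordered token sequence, so one pre-trained model handles every task with minimal architectural change.

    def transform(task, fields, start="<s>", delim="$", extract="<e>"):
        """Flatten a structured task instance into one token sequence."""
        if task == "entailment":    # premise, then hypothesis
            body = f"{fields['premise']} {delim} {fields['hypothesis']}"
        elif task == "similarity":  # unordered pair; the paper scores both orderings
            body = f"{fields['text1']} {delim} {fields['text2']}"
        else:                       # plain document classification
            body = fields["text"]
        return f"{start} {body} {extract}"

    print(transform("entailment", {"premise": "A cat sleeps.",
                                   "hypothesis": "An animal rests."}))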
  20. Azpiazu, I.M.; Soledad Pera, M.: Is cross-lingual readability assessment possible? (2020) 0.02
    0.021126779 = product of:
      0.042253558 = sum of:
        0.042253558 = product of:
          0.084507115 = sum of:
            0.084507115 = weight(_text_:making in 5868) [ClassicSimilarity], result of:
              0.084507115 = score(doc=5868,freq=4.0), product of:
                0.28297603 = queryWeight, product of:
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.059222404 = queryNorm
                0.298637 = fieldWeight in 5868, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.778192 = idf(docFreq=1010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most research efforts related to automatic readability assessment focus on the design of strategies that apply to a specific language. These state-of-the-art strategies are highly dependent on linguistic features that best suit the language for which they were intended, constraining their adaptability and making it difficult to determine whether they would remain effective if they were applied to estimate the level of difficulty of texts in other languages. In this article, we present the results of a study designed to determine the feasibility of a cross-lingual readability assessment strategy. To do so, we first analyzed the most common features used for readability assessment and determined their influence on the readability prediction process of 6 different languages: English, Spanish, Basque, Italian, French, and Catalan. In addition, we developed a cross-lingual readability assessment strategy that serves as a means to empirically explore the potential advantages of employing a single strategy (and set of features) for readability assessment in different languages, including interlanguage prediction agreement and prediction accuracy improvement for low-resource languages.
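
  As a toy illustration of the kind of language-agnostic surface features such cross-lingual strategies rely on (the two features below are generic examples, not the article's actual feature set):

    import statistics

    def surface_features(text):
        """Two surface features commonly used in readability work; a
        cross-lingual strategy would feed these into one shared model."""
        sentences = [s for s in text.replace("?", ".").split(".") if s.strip()]
        words = text.split()
        return {
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "avg_word_length": statistics.mean(len(w.strip(".,?")) for w in words),
        }

    print(surface_features("Short words are easy. Polysyllabic terminology complicates comprehension."))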

Languages

  • e 51
  • d 18

Types

  • a 49
  • el 9
  • m 8
  • s 4
  • x 4
  • p 2
  • d 1