Search (2 results, page 1 of 1)

  • author_ss:"Kim, Y.H."
  • theme_ss:"Automatisches Abstracting"
  1. Kim, H.H.; Kim, Y.H.: Video summarization using event-related potential responses to shot boundaries in real-time video watching (2019) 0.01
    0.008069678 = product of:
      0.040348392 = sum of:
        0.040348392 = weight(_text_:context in 4685) [ClassicSimilarity], result of:
          0.040348392 = score(doc=4685,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 4685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4685)
      0.2 = coord(1/5)
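
    The tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, the Python below recomputes the score from the values shown; queryNorm and fieldNorm are taken as given, since they depend on the whole index and the field length:

      import math

      # ClassicSimilarity: tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1
      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
          idf = math.log(max_docs / (doc_freq + 1)) + 1.0  # 4.14465 for docFreq=1904
          query_weight = idf * query_norm                  # 0.17622331
          field_weight = tf * idf * field_norm             # 0.22896172
          return query_weight * field_weight               # 0.040348392

      s = term_score(2.0, 1904, 44218, query_norm=0.04251826, field_norm=0.0390625)
      print(s * 0.2)  # times coord(1/5) -> ~0.008069678, the score shown above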
    
    Abstract
    Our aim was to develop an event-related potential (ERP)-based method to construct a video skim consisting of key shots, bridging the semantic gap between the topic inferred from a whole video and that from its summary. We examined Mayer's cognitive model, wherein the topic integration process evoked in a user by a visual stimulus can be associated with long-latency ERP components. Through a literature review, we determined that long-latency ERP components are suitable for measuring a user's neuronal response. We hypothesized that N300 is specific to the categorization of all shots regardless of topic relevance, N400 to the semantic mismatching process for topic-irrelevant shots, and P600 to the context updating process for topic-relevant shots. In our experiment, the N400 component showed more negative ERP signals in response to topic-irrelevant shots than to topic-relevant shots, with a fronto-central scalp pattern; the P600 component showed more positive ERP signals for topic-relevant shots than for topic-irrelevant shots, also with a fronto-central scalp pattern. We used discriminant and artificial neural network (ANN) analyses to decode video shot relevance and observed that the ANN produced particularly high success rates: 91.3% on the training set and 100% on the test set.
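
    The abstract gives no implementation details for the discriminant and ANN analyses, so the following is only a hypothetical sketch of the general technique: a small feed-forward network (scikit-learn's MLPClassifier, an assumption, not the authors' tool) classifying per-shot ERP amplitude features as topic-relevant or topic-irrelevant. The feature choice and all data here are synthetic illustrations:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      # Hypothetical per-shot features: mean amplitudes in the N300/N400/P600
      # windows at fronto-central electrodes (the real feature set is not given).
      relevant = rng.normal([0.0, -1.0, 2.0], 1.0, size=(100, 3))    # more positive P600
      irrelevant = rng.normal([0.0, -3.0, 0.0], 1.0, size=(100, 3))  # more negative N400
      X = np.vstack([relevant, irrelevant])
      y = np.array([1] * 100 + [0] * 100)  # 1 = topic-relevant shot

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      clf.fit(X_tr, y_tr)
      print(clf.score(X_te, y_te))  # accuracy on synthetic data, not the reported 91.3%/100%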
  2. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.01
    0.007715711 = product of:
      0.038578555 = sum of:
        0.038578555 = product of:
          0.057867832 = sum of:
            0.029064644 = weight(_text_:29 in 2640) [ClassicSimilarity], result of:
              0.029064644 = score(doc=2640,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
            0.028803186 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.028803186 = score(doc=2640,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
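
    This second tree combines two term scores under nested coord factors: the inner sub-query matched 2 of 3 clauses (coord(2/3)) and the outer query 1 of 5 (coord(1/5)). A sketch of the same arithmetic, reusing the ClassicSimilarity formula with the values shown:

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)
          idf = math.log(max_docs / (doc_freq + 1)) + 1.0
          return (idf * query_norm) * (tf * idf * field_norm)

      s29 = term_score(2.0, 3565, 44218, 0.04251826, 0.0390625)  # ~0.029064644
      s22 = term_score(2.0, 3622, 44218, 0.04251826, 0.0390625)  # ~0.028803186
      print((s29 + s22) * (2 / 3) * (1 / 5))  # ~0.007715711, the score for result 2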
    
    Date
    22. 1.2016 12:29:41