Search (1 result, page 1 of 1)

  • author_ss:"Yang, T.-H."
  • theme_ss:"Automatisches Indexieren"
  1. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: A flexible template generation and matching method with applications for publication reference metadata extraction (2021)
    
    Abstract
    Conventional rule-based approaches use exact template matching to capture linguistic information and therefore need to enumerate all variations. We propose a novel flexible template generation and matching scheme called the principle-based approach (PBA), based on sequence alignment, and employ it for reference metadata extraction (RME) to demonstrate its effectiveness. The main contributions of this research are threefold. First, we propose an automatic template generation method that can capture prominent patterns using the dominating set algorithm. Second, we devise an alignment-based template-matching technique that uses a logistic regression model, which makes it more general and flexible than pure rule-based approaches. Last, we apply PBA to RME on extensive cross-domain corpora and demonstrate its robustness and generality. Experiments reveal that the same set of templates produced by the PBA framework not only delivers consistent performance on various unseen domains, but also surpasses hand-crafted knowledge (templates). We use four independent journal-style test sets and one conference-style test set in the experiments. When compared to renowned machine learning methods, such as conditional random fields (CRF), as well as recent deep learning methods (i.e., bi-directional long short-term memory with a CRF layer, Bi-LSTM-CRF), PBA has the best performance for all datasets.
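
    The abstract's first contribution, selecting representative templates via a dominating set, can be illustrated with a minimal sketch. This is an assumption about the general idea, not the paper's actual algorithm: reference strings form the nodes of a similarity graph, edges connect strings whose alignment similarity exceeds a threshold, and a greedy dominating-set approximation picks a small set of "templates" that together cover every string. The function names, the use of difflib's SequenceMatcher as a stand-in aligner, and the threshold value are all illustrative choices.

    ```python
    from difflib import SequenceMatcher

    def similarity(a, b):
        # Character-alignment similarity in [0, 1]; difflib's
        # longest-matching-block heuristic stands in for the
        # paper's sequence-alignment scoring (an assumption).
        return SequenceMatcher(None, a, b).ratio()

    def greedy_dominating_templates(strings, threshold=0.6):
        """Pick a small set of 'template' strings so that every input
        string is similar (>= threshold) to at least one chosen template:
        a greedy approximation of a dominating set on the similarity graph."""
        # For each candidate, precompute the set of strings it would cover.
        # Every string covers at least itself, since similarity(s, s) == 1.0.
        cover = {
            s: {t for t in strings if similarity(s, t) >= threshold}
            for s in strings
        }
        uncovered = set(strings)
        templates = []
        while uncovered:
            # Greedily take the candidate covering the most uncovered strings.
            best = max(uncovered, key=lambda s: len(cover[s] & uncovered))
            templates.append(best)
            uncovered -= cover[best]
        return templates
    ```

    By construction the loop terminates and every input ends up covered, because each selected string removes at least itself from the uncovered set. The greedy choice is the standard logarithmic-factor approximation for dominating set; the paper's exact selection procedure may differ.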