Search (4 results, page 1 of 1)

  • × theme_ss:"Hypertext"
  • × year_i:[2010 TO 2020}
  1. Lima, G.A.B. de Oliveira: Conceptual modeling of hypertexts : methodological proposal for the management of semantic content in digital libraries (2012) 0.01
    0.0083865 = product of:
      0.050318997 = sum of:
        0.025159499 = weight(_text_:web in 451) [ClassicSimilarity], result of:
          0.025159499 = score(doc=451,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 451, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=451)
        0.025159499 = weight(_text_:web in 451) [ClassicSimilarity], result of:
          0.025159499 = score(doc=451,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 451, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=451)
      0.16666667 = coord(2/12)
    
    Abstract
This research continues the implementation of the Hypertext Map prototype (MHTX) proposed by Lima (2004), with the general objective of turning the MHTX into a semantic content management product that facilitates navigation in context, supported by customizable, easy-to-use software with high-end desktop/web interfaces that sustain the operation of its functions. In the long run, these studies also aim to simplify the processes of organizing, accessing, and retrieving information in digital libraries, making archive management feasible for authors, content managers, and information professionals.
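The 0.01 score reported for this hit can be re-derived from the constants in the explain tree above. A minimal sketch of Lucene's ClassicSimilarity arithmetic (TF-IDF), using the numbers reported for doc 451:

```python
import math

# Constants copied from the explain tree for doc 451, term "web".
tf = math.sqrt(2.0)        # tf(freq=2.0) = sqrt(freq) = 1.4142135
idf = 3.2635105            # idf(docFreq=4597, maxDocs=44218)
query_norm = 0.035634913   # queryNorm
field_norm = 0.046875      # fieldNorm(doc=451)

query_weight = idf * query_norm           # -> 0.11629491
field_weight = tf * idf * field_norm      # -> 0.21634221 (fieldWeight)
term_score = query_weight * field_weight  # -> 0.025159499

# The term matched in two identical clauses, and 2 of the 12 query
# clauses matched, so the sum is scaled by coord(2/12).
total = (term_score + term_score) * (2.0 / 12.0)
print(total)  # close to the 0.0083865 reported above
```

The same arithmetic explains the other hits; only the per-document constants (freq, idf, fieldNorm, coord) change.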
  2. Ferreira, R.S.; Graça Pimentel, M. de; Cristo, M.: A wikification prediction model based on the combination of latent, dyadic, and monadic features (2018) 0.01
    0.0069887503 = product of:
      0.0419325 = sum of:
        0.02096625 = weight(_text_:web in 4119) [ClassicSimilarity], result of:
          0.02096625 = score(doc=4119,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 4119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4119)
        0.02096625 = weight(_text_:web in 4119) [ClassicSimilarity], result of:
          0.02096625 = score(doc=4119,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 4119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4119)
      0.16666667 = coord(2/12)
    
    Abstract
Considering repositories of web documents that are semantically linked and created in a collaborative fashion, as in the case of Wikipedia, a key problem faced by content providers is the placement of links in the articles. These links must support user navigation and provide a deeper semantic interpretation of the content. Current wikification methods exploit machine learning techniques to capture characteristics of the concepts and their associations. In previous work, we proposed a preliminary prediction model combining traditional predictors with a latent component that captures the concept graph topology by means of matrix factorization. In this work, we provide a detailed description of our method and a deeper comparison with a state-of-the-art wikification method using a sample of Wikipedia, and report a gain of up to 13% in F1 score. We also provide a comprehensive analysis of the model's performance, showing the importance of the latent predictor component and of the attributes derived from the associations between concepts. Moreover, we include an analysis that allows us to conclude that the model is resilient to ambiguity without requiring a disambiguation phase. Finally, we report the positive impact of selecting training samples from specific content-quality classes.
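As a rough, hypothetical illustration of the latent component described in this abstract (not the authors' implementation; the toy adjacency data, latent dimension, and learning rate are all assumptions), the concept-graph topology can be captured by factorizing an adjacency matrix into low-rank factors whose inner products score candidate links:

```python
import numpy as np

# Hypothetical sketch: factorize a toy concept adjacency matrix A into
# latent factors U and V so that (U @ V.T)[i, j] scores a link i -> j.
rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],        # A[i, j] = 1 if concept i links to j
              [1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

k = 2                              # latent dimension (an assumption)
U = rng.normal(scale=0.1, size=(A.shape[0], k))
V = rng.normal(scale=0.1, size=(A.shape[1], k))

for _ in range(500):               # plain gradient descent on squared error
    E = A - U @ V.T                # residual against observed links
    U, V = U + 0.05 * E @ V, V + 0.05 * E.T @ U

link_scores = U @ V.T              # higher score -> more likely link
```

In the paper's setting such latent scores would be combined with dyadic and monadic features in a supervised predictor; here the factorization step stands alone for illustration.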
  3. Finnemann, N.O.: Hypertext configurations : genres in networked digital media (2017) 0.00
    0.0045087244 = product of:
      0.05410469 = sum of:
        0.05410469 = weight(_text_:wide in 3525) [ClassicSimilarity], result of:
          0.05410469 = score(doc=3525,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 3525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3525)
      0.083333336 = coord(1/12)
    
    Abstract
The article presents a conceptual framework for distinguishing different sorts of heterogeneous digital materials. The hypothesis is that a wide range of heterogeneous data resources can be characterized and classified according to their particular configurations of hypertext features such as scripts, links, interactive processes, and time scalings, and that the hypertext configuration is a major, but not the sole, source of the messiness of big data. The notion of hypertext will be revalidated, placed at the center of the interpretation of networked digital media, and used in the analysis of the fast-growing amounts of heterogeneous digital collections, assemblages, and corpora. The introduction summarizes the wider background of a fast-changing data landscape.
  4. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.00
    0.0010058414 = product of:
      0.012070097 = sum of:
        0.012070097 = product of:
          0.024140194 = sum of:
            0.024140194 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
              0.024140194 = score(doc=633,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.19345059 = fieldWeight in 633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=633)
          0.5 = coord(1/2)
      0.083333336 = coord(1/12)
    
    Date
    22. 2.2013 11:39:23