Search (3798 results, page 2 of 190)

  1. Julien, C.-A.; Tirilly, P.; Leide, J.E.; Guastavino, C.: Constructing a true LCSH tree of a science and engineering collection (2012) 0.08
    
    Abstract
    The Library of Congress Subject Headings (LCSH) is a subject structure used to index large library collections throughout the world. Browsing a collection through LCSH is difficult using current online tools in part because users cannot explore the structure using their existing experience navigating file hierarchies on their hard drives. This is due to inconsistencies in the LCSH structure, which does not adhere to the specific rules defining tree structures. This article proposes a method to adapt the LCSH structure to reflect a real-world collection from the domain of science and engineering. This structure is transformed into a valid tree structure using an automatic process. The analysis of the resulting LCSH tree shows a large and complex structure. The analysis of the distribution of information within the LCSH tree reveals a power law distribution where the vast majority of subjects contain few information items and a few subjects contain the vast majority of the collection.
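The transformation described in this abstract must resolve headings that have more than one broader heading before a valid tree can exist. The article's own procedure is not reproduced here; the sketch below (made-up headings, and the simplistic rule of keeping the first-listed broader term as the sole parent) only illustrates forcing such a poly-hierarchy into a tree:

```python
# Minimal sketch, NOT the authors' algorithm: collapse a poly-hierarchy
# (terms with several broader terms) into a tree by keeping one parent.

def to_tree(broader):
    """broader: term -> list of broader terms (possibly several).
    Returns term -> single parent (None for top terms)."""
    return {term: (parents[0] if parents else None)
            for term, parents in broader.items()}

# Invented example headings:
broader = {
    "Plants": [],
    "Ecology": [],
    "Forests": ["Plants"],
    "Forest ecology": ["Ecology", "Forests"],   # two broader terms
    "Trees (Botany)": ["Plants"],
}
tree = to_tree(broader)
```

With a single parent per term, every term now has exactly one path to a root, which is the tree property the LCSH poly-hierarchy lacks.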
  2. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    
    Content
See: http://creativechoice.org/doc/HansJonas.pdf.
  3. Zhang, M.; Zhou, G.D.; Aw, A.: Exploring syntactic structured features over parse trees for relation extraction using kernel methods (2008) 0.08
    
    Abstract
    Extracting semantic relationships between entities from text documents is challenging in information extraction and important for deep information processing and management. This paper proposes to use the convolution kernel over parse trees together with support vector machines to model syntactic structured information for relation extraction. Compared with linear kernels, tree kernels can effectively explore implicitly huge syntactic structured features embedded in a parse tree. Our study reveals that the syntactic structured features embedded in a parse tree are very effective in relation extraction and can be well captured by the convolution tree kernel. Evaluation on the ACE benchmark corpora shows that using the convolution tree kernel only can achieve comparable performance with previous best-reported feature-based methods. It also shows that our method significantly outperforms previous two dependency tree kernels for relation extraction. Moreover, this paper proposes a composite kernel for relation extraction by combining the convolution tree kernel with a simple linear kernel. Our study reveals that the composite kernel can effectively capture both flat and structured features without extensive feature engineering, and easily scale to include more features. Evaluation on the ACE benchmark corpora shows that the composite kernel outperforms previous best-reported methods in relation extraction.
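The convolution (subset-tree) kernel of Collins and Duffy that such relation extractors build on can be stated compactly. Below is a sketch with plain `(label, children)` tuples for parse nodes and a decay factor `lam`; the toy trees are invented, not the ACE data or the authors' exact kernel:

```python
# Sketch of the Collins-Duffy convolution (subset-tree) kernel.

def tree_kernel(t1, t2, lam=1.0):
    """Count (decayed) tree fragments shared by parse trees t1 and t2."""
    def subtrees(t):
        yield t
        for child in t[1]:
            yield from subtrees(child)

    def common(n1, n2):
        (l1, ch1), (l2, ch2) = n1, n2
        # productions must match: same label, same child-label sequence
        if l1 != l2 or [c[0] for c in ch1] != [c[0] for c in ch2]:
            return 0.0
        if not ch1:                      # bare leaves carry no fragment
            return 0.0
        score = lam
        for a, b in zip(ch1, ch2):
            score *= 1.0 + common(a, b)
        return score

    return sum(common(a, b) for a in subtrees(t1) for b in subtrees(t2))

t1 = ("S", [("NP", [("D", [("the", [])]), ("N", [("dog", [])])]),
            ("VP", [("V", [("runs", [])])])])
t2 = ("S", [("NP", [("D", [("the", [])]), ("N", [("dog", [])])]),
            ("VP", [("V", [("sleeps", [])])])])
k = tree_kernel(t1, t2)
```

The composite kernel the paper proposes would add a weighted linear (feature-vector) kernel to this tree score.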
  4. Yang, C.C.; Lin, J.; Wei, C.-P.: Retaining knowledge for document management : category-tree integration by exploiting category relationships and hierarchical structures (2010) 0.08
    
    Abstract
The category-tree document-classification structure is widely used by enterprises and information providers to organize, archive, and access documents for effective knowledge management. However, category trees from various sources use different hierarchical structures, which usually make mappings between categories in different category trees difficult. In this work, we propose a category-tree integration technique. We develop a method to learn the relationships between any two categories and develop operations such as mapping, splitting, and insertion for this integration. According to the parent-child relationship of the integrating categories, the developed decision rules use integration operations to integrate categories from the source category tree with those from the master category tree. A unified category tree can accumulate knowledge from multiple resources without forfeiting the knowledge in individual category trees. Experiments have been conducted to measure the performance of the integration operations and the accuracy of the integrated category trees. The proposed category-tree integration technique achieves greater than 80% integration accuracy, and the insert operation is the most frequently utilized, followed by map and split. The insert operation achieves 77% of F1, while the map and split operations achieve 86% and 29% of F1, respectively.
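The paper's decision rules for choosing among map, split, and insert are learned; as a purely illustrative stand-in, one could decide from the overlap of the two categories' document sets. The thresholds and the rule below are invented, not the authors' method:

```python
def integrate_decision(source_docs, master_docs, high=0.8, low=0.2):
    """Hypothetical rule: pick an integration operation from the Jaccard
    overlap of a source category's and a master category's document sets.
    Thresholds are made up for illustration."""
    overlap = len(source_docs & master_docs) / len(source_docs | master_docs)
    if overlap >= high:
        return "map"      # categories are effectively equivalent
    if overlap <= low:
        return "insert"   # source category becomes a new child
    return "split"        # divide the source category's documents
```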
  5. Yager, R.R.: Knowledge trees and protoforms in question-answering systems (2006) 0.08
    
    Abstract
    We point out that question-answering systems differ from other information-seeking applications, such as search engines, by having a deduction capability, an ability to answer questions by a synthesis of information residing in different parts of its knowledge base. This capability requires appropriate representation of various types of human knowledge, rules for locally manipulating this knowledge, and a framework for providing a global plan for appropriately mobilizing the information in the knowledge to address the question posed. In this article we suggest tools to provide these capabilities. We describe how the fuzzy set-based theory of approximate reasoning can aid in the process of representing knowledge. We discuss how protoforms can be used to aid in deduction and local manipulation of knowledge. The idea of a knowledge tree is introduced to provide a global framework for mobilizing the knowledge base in response to a query. We look at some types of commonsense and default knowledge. This requires us to address the complexity of the nonmonotonicity that these types of knowledge often display. We also briefly discuss the role that Dempster-Shafer structures can play in representing knowledge.
    Date
    22. 7.2006 17:10:27
  6. Smiraglia, R.P.: ISKO 12's bookshelf - evolving intension : an editorial (2013) 0.08
    
    Abstract
The 2012 biennial international research conference of the International Society for Knowledge Organization was held August 6-9, in Mysore, India. It was the second international ISKO conference to be held in India (Canada and India are the only countries to have hosted two international ISKO conferences), and for many attendees travel to the exotic Indian subcontinent was a new experience. Interestingly, the mix of people attending was quite different from recent meetings held in Europe or North America. The conference was lively and, as usual, jam-packed with new research. Registration took place on a veranda in the garden of the B. N. Bahadur Institute of Management Sciences where the meetings were held at the University of Mysore. This graceful tree (Figure 1) kept us company and kept watch over our considerations (as indeed it does over the academic enterprise of the Institute). The conference theme was "Categories, Contexts and Relations in Knowledge Organization." The opening and closing sessions fittingly were devoted to serious introspection about the direction of the domain of knowledge organization. This editorial, in line with those following past international conferences, is an attempt to comment on the state of the domain by reflecting domain-analytically on the proceedings of the conference, primarily using bibliometric measures. In general, it seems the domain is secure in its intellectual moorings, as it continues to welcome a broad granular array of shifting research questions in its intension. The continual concretizing of the theoretical core of knowledge organization (KO) seems to act as a catalyst for emergent ideas, which can be observed as part of the evolving intension of the domain.
    Date
    22. 2.2013 11:43:34
  7. Fóris, A.: Network theory and terminology (2013) 0.08
    
    Content
Contribution to a special issue: 'Paradigms of Knowledge and its Organization: The Tree, the Net and Beyond,' edited by Fulvio Mazzocchi and Gian Carlo Fedeli. - See: http://www.ergon-verlag.de/isko_ko/downloads/ko_40_2013_6_i.pdf.
    Date
    2. 9.2014 21:22:48
  8. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.08
    
    Abstract
This study explores how user subject knowledge influences search task processes and outcomes, as well as how search behavior is influenced by subject-oriented information visualization (IV) tools. To enable integrated searches, the proposed WikiMap+ integrates search functions and IV tools (i.e., a topic network and hierarchical topic tree) and gathers information from Wikipedia pages and Google Search results. To evaluate the effectiveness of the proposed interfaces, we design subject-oriented tasks and adopt extended evaluation measures. We recruited 48 novices and 48 knowledgeable users, that is, intermediates, for the evaluation. Our results show that novices using the proposed interface demonstrate better search performance than intermediates using Wikipedia. We therefore conclude that our tools help close the gap between novices and intermediates in information searches. The results also show that intermediates can take advantage of the search tool by leveraging the IV tools to browse subtopics and formulate better queries with less effort. We conclude that embedding the IV and search tools in the interface can result in different search behavior but improved task performance. We provide implications for designing search systems that include IV features adapted to users' levels of subject knowledge, helping them achieve better task performance.
    Date
    9.12.2018 16:22:25
  9. Craven, T.C.: Salient node notation (1979) 0.07
    
    Abstract
Salient node notation is a technique for decreasing the average length of notation in a classification scheme without sacrificing expressiveness or disturbing the succession of characteristics of the filing order. Assignment of notation begins at a node of the classification tree other than the root. This salient node may be determined algorithmically, given data on the bias of the collection to be classified, even if only part of the tree has been developed. A dummy value is reserved to indicate upward movement in the tree. The technique is especially applicable to classification schemes for specialized collections and to facets, such as space, in which the biases of human existence are especially prominent.
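One plausible reading of "determined algorithmically, given data on the bias of the collection" is to descend from the root while some child's subtree still holds most of the collection. The tree, document counts, and the one-half threshold below are all invented for illustration; this is not Craven's actual algorithm:

```python
def subtree_weight(tree, counts, node):
    """Documents classed at `node` or anywhere below it."""
    return counts.get(node, 0) + sum(subtree_weight(tree, counts, child)
                                     for child in tree.get(node, []))

def salient_node(tree, counts, root, threshold=0.5):
    """Descend from the root while some child's subtree still holds at
    least `threshold` of the whole collection (hypothetical rule)."""
    total = subtree_weight(tree, counts, root)
    node = root
    while True:
        heavy = [c for c in tree.get(node, [])
                 if subtree_weight(tree, counts, c) >= threshold * total]
        if not heavy:
            return node
        node = heavy[0]

# Invented collection bias: 100 documents, 90 of them under class B.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
counts = {"C": 10, "D": 49, "E": 41}
```

Starting notation at B rather than the root A shortens the call numbers of the 90% of the collection that sits under B.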
  10. Merrin, G.: Access points and search methods in the Sibil system with special reference to Boolean and tree search (1986) 0.07
    
  11. Schabas, A.: Videotex information systems : complements to the tree structure (1982) 0.07
    
  12. Craig, A.; Schriar, S.: ¬The Find-It! Illinois controlled vocabulary : improving access to government information through the Jessica subject tree (2001) 0.07
    
  13. White, K.J.; Sutcliffe, R.F.E.: Applying incremental tree induction to retrieval : from manuals and medical texts (2006) 0.07
    
    Abstract
    The Decision Tree Forest (DTF) is an architecture for information retrieval that uses a separate decision tree for each document in a collection. Experiments were conducted in which DTFs working with the incremental tree induction (ITI) algorithm of Utgoff, Berkman, and Clouse (1997) were trained and evaluated in the medical and word processing domains using the Cystic Fibrosis and SIFT collections. Performance was compared with that of a conventional inverted index system (IIS) using a BM25-derived probabilistic matching function. Initial results using DTF were poor compared to those obtained with IIS. We then simulated scenarios in which large quantities of training data were available, by using only those parts of the document collection that were well covered by the data sets. Consequently, the retrieval effectiveness of DTF improved substantially. In one particular experiment, precision and recall for DTF were 0.65 and 0.67 respectively, values that compared favorably with values of 0.49 and 0.56 for IIS.
  14. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.07
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, significant studies have focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction. In particular, we modified the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In our experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, our trace-tree kernel outperforms the standard tree kernel, and a composite kernel outperforms individual kernels for relation extraction.
  15. Mazzocchi, F.: Images of thought and their relation to classification : the tree and the net (2013) 0.07
    
    Abstract
This article takes a look at how images have been used through history as metaphors or models to illustrate (philosophical) ways of thinking, with a special focus on figures of the tree and the net. It goes on to look at how classificatory thought depends on the epistemological framework in which it originates. Also examined is the Western model of classification and how it has favoured the logic of the tree, whose limitations are becoming increasingly apparent. The image of the net is then used to portray (as a pluriverse) the cognitive space of human knowledge, and a culturally-biased view of classification is upheld. Finally, some arguments are put forward to reformulate this view on the basis of an approach that combines epistemic and conceptual pluralism with a weak realism.
    Content
Contribution to a special issue: 'Paradigms of Knowledge and its Organization: The Tree, the Net and Beyond,' edited by Fulvio Mazzocchi and Gian Carlo Fedeli. - See: http://www.ergon-verlag.de/isko_ko/downloads/ko_40_2013_6_b.pdf.
  16. Chang, R.: ¬The development of indexing technology (1993) 0.07
    
    Abstract
Reviews the basic techniques of computerized indexing, including various file access methods such as the Sequential Access Method (SAM), Direct Access Method (DAM), Indexed Sequential Access Method (ISAM), and Virtual Storage Access Method (VSAM), and various B-tree (balanced tree) structures. Illustrates how records are stored and accessed, and how B-trees are used to improve the operations of information retrieval and maintenance.
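The indexed-sequential idea the abstract reviews can be modelled in a few lines: a sorted record file plus a sparse in-memory index holding each block's first key (the structure a B-tree generalizes to multiple levels of blocks). A toy sketch, not any particular system's implementation:

```python
import bisect

class ISAMIndex:
    """Toy ISAM-style file: sorted records plus a sparse index of the
    first key in each fixed-size block."""

    def __init__(self, records, block_size=4):
        self.records = sorted(records)            # (key, value) pairs
        self.block_size = block_size
        self.index = [self.records[i][0]
                      for i in range(0, len(self.records), block_size)]

    def get(self, key):
        # Locate the block whose leading key is <= key, then scan it.
        block = bisect.bisect_right(self.index, key) - 1
        if block < 0:
            return None
        start = block * self.block_size
        for k, v in self.records[start:start + self.block_size]:
            if k == key:
                return v
        return None

idx = ISAMIndex([(i, i * i) for i in range(10)], block_size=4)
```

A lookup touches only the sparse index and one block rather than the whole file, which is the access-method advantage the article illustrates.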
  17. Chang, R.: Keyword searching and indexing (1993) 0.07
    
    Abstract
    Explains how a computer indexing system works. Reviews fundamentals of how data are stored and retrieved by computers. Describes B-Tree and B+-Tree indexing structures. Gives basic keyword searching techniques that the user must apply to make use of the indexing programs. The demand for keyword retrieval is increasing and librarians should expect to see the keyword-indexing feature become commonly available
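The keyword-indexing feature described reduces to an inverted index: a map from each term to the set of documents containing it, intersected at query time. A minimal sketch with toy documents and naive whitespace tokenization:

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}; returns keyword -> set of doc ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, *keywords):
    """AND search: ids of documents containing every keyword."""
    hits = [index.get(w.lower(), set()) for w in keywords]
    return set.intersection(*hits) if hits else set()

docs = {1: "B-Tree indexing structures",
        2: "keyword searching and indexing",
        3: "keyword retrieval"}
index = build_index(docs)
```

In a production system the per-term posting sets would themselves live in a B-tree or similar disk structure, tying this item to the previous one.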
  18. Chiba, K.; Kyojima, M.: Document transformation based on syntax-directed tree translation (1995) 0.07
    
    Abstract
Presents a description system for the transformation of structured documents based on Context Free Grammars (CFGs). The system caters to transformations between different document class descriptions and is presented mainly in terms of logical structure transformation. Proposes 2 requirements for transformation: the output document class must be explicitly represented, and inconsistency must be avoided. Introduces a grammar for document class descriptions, the tree-preserving Context Free Grammar, and gives Syntax-Directed Tree Translation (SDTT) for transformations of a document. The SDTT transformation is formal, concise, and consistent with the above 2 requirements.
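A syntax-directed tree translation of the kind described can be sketched as a recursive walk driven by a rule table that renames elements and reorders children. The element names and rules below are invented for illustration; real SDTT rules would be derived from the two document-class grammars:

```python
# Hypothetical rule table: source element -> (target element, child order).
RULES = {
    "report":  ("article", [0, 1]),   # keep both children, same order
    "heading": ("title",   [0]),
    "para":    ("p",       [0]),
}

def translate(node):
    """node is (name, children); leaves are (text, [])."""
    name, children = node
    if name not in RULES:
        return node                   # text and unmapped nodes pass through
    target, order = RULES[name]
    return (target, [translate(children[i]) for i in order])

src = ("report", [("heading", [("Intro", [])]),
                  ("para", [("Hello", [])])])
out = translate(src)
```

Because every output shape is dictated by the rule table, the target document class is explicitly represented, matching the first of the paper's 2 requirements.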
  19. Sieverts, E.: Liever browsen dan zoeken (1998) 0.07
    
    Abstract
Despite the development of the WWW, searchers still experience difficulties following links between sites and cannot be sure that a site contains the required information. 3 software programs developed to guide users through the maze of hyperlinks are: Dynamic Diagrams, the Hyperbolic tree, and the Brain. In contrast to the other programs, which operate on web servers and display hyperlinks in diagrammatic form, the Brain is installed on individual PCs and can be customised to meet users' requirements.
    Object
    Hyperbolic tree
  20. Diaz, I.; Morato, J.; Lioréns, J.: ¬An algorithm for term conflation based on tree structures (2002) 0.07
    
    Abstract
This work presents a new stemming algorithm that stores its stemming information in tree structures. This storage enhances the performance of the algorithm by reducing the search space and the overall complexity. The final result of the stemming algorithm is a normalized concept, understanding this process as the automatic extraction of the generic form (or a lexeme) for a selected term.
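One common way to hold conflation information in a tree, sketched below, is a trie keyed on reversed suffixes: walking backwards from the end of a word finds the longest matching suffix rule in time proportional to the suffix length, which is the search-space reduction such structures buy. The rule set is made up and this is not the authors' algorithm:

```python
# Toy suffix-stripping rules (invented): suffix -> replacement.
RULES = {"ing": "", "ies": "y", "es": "", "s": ""}

def build_trie(rules):
    """Store each rule under its reversed suffix; '$' marks a rule end."""
    trie = {}
    for suffix, repl in rules.items():
        node = trie
        for ch in reversed(suffix):
            node = node.setdefault(ch, {})
        node["$"] = (len(suffix), repl)
    return trie

def conflate(word, trie):
    """Apply the longest suffix rule found while walking from the end."""
    node, best = trie, None
    for ch in reversed(word):
        if ch not in node:
            break
        node = node[ch]
        if "$" in node:
            best = node["$"]          # longest match seen so far
    if best is None:
        return word
    cut, repl = best
    return word[:-cut] + repl

trie = build_trie(RULES)
```

Because "ies" and "es" share a path with "s" in the trie, the three rules are tested in a single backward walk rather than one scan per rule.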
