Search (4318 results, page 1 of 216)

  1. Falquet, G.; Guyot, J.; Nerima, L.: Languages and tools to specify hypertext views on databases (1999) 0.28
    0.27746326 = sum of:
      0.06325436 = product of:
        0.18976305 = sum of:
          0.18976305 = weight(_text_:objects in 3968) [ClassicSimilarity], result of:
            0.18976305 = score(doc=3968,freq=6.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.6102756 = fieldWeight in 3968, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=3968)
        0.33333334 = coord(1/3)
      0.2142089 = sum of:
        0.16665098 = weight(_text_:tree in 3968) [ClassicSimilarity], result of:
          0.16665098 = score(doc=3968,freq=2.0), product of:
            0.38349885 = queryWeight, product of:
              6.5552235 = idf(docFreq=170, maxDocs=44218)
              0.058502786 = queryNorm
            0.43455404 = fieldWeight in 3968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5552235 = idf(docFreq=170, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.047557916 = weight(_text_:22 in 3968) [ClassicSimilarity], result of:
          0.047557916 = score(doc=3968,freq=2.0), product of:
            0.20486678 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.058502786 = queryNorm
            0.23214069 = fieldWeight in 3968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
    
    Abstract
    We present a declarative language for the construction of hypertext views on databases. The language is based on an object-oriented data model and a simple hypertext model with reference and inclusion links. A hypertext view specification consists of a collection of parameterized node schemes which specify how to construct node and link instances from the database contents. We show how this language can address different issues in hypertext view design. These include: the direct mapping of objects to nodes; the construction of complex nodes based on sets of objects; the representation of polymorphic sets of objects; and the representation of tree and graph structures. We have defined sublanguages corresponding to particular database models (relational, semantic, object-oriented) and implemented tools to generate Web views for these database models.
    Date
    21.10.2000 15:01:22
  2. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.24
    0.2370327 = sum of:
      0.042606562 = product of:
        0.12781969 = sum of:
          0.12781969 = weight(_text_:objects in 3886) [ClassicSimilarity], result of:
            0.12781969 = score(doc=3886,freq=2.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.41106653 = fieldWeight in 3886, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3886)
        0.33333334 = coord(1/3)
      0.19442613 = product of:
        0.38885227 = sum of:
          0.38885227 = weight(_text_:tree in 3886) [ClassicSimilarity], result of:
            0.38885227 = score(doc=3886,freq=8.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              1.0139594 = fieldWeight in 3886, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3886)
        0.5 = coord(1/2)
    
    Abstract
    The paper investigates the acceleration of t-SNE (an embedding technique commonly used for the visualization of high-dimensional data in scatter plots) using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N) time. Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
  3. Colby, L.S.; Saxton, L.V.; Gucht, D.V.: Concepts for modelling and querying list-structured data (1994) 0.17
    0.16948698 = sum of:
      0.051646955 = product of:
        0.15494086 = sum of:
          0.15494086 = weight(_text_:objects in 8564) [ClassicSimilarity], result of:
            0.15494086 = score(doc=8564,freq=4.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.49828792 = fieldWeight in 8564, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=8564)
        0.33333334 = coord(1/3)
      0.11784003 = product of:
        0.23568006 = sum of:
          0.23568006 = weight(_text_:tree in 8564) [ClassicSimilarity], result of:
            0.23568006 = score(doc=8564,freq=4.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.6145522 = fieldWeight in 8564, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=8564)
        0.5 = coord(1/2)
    
    Abstract
    Traditionally, data models and query languages have provided mechanisms for dealing with sets of objects. Many database applications, however, are list oriented (i.e., they deal with collections or aggregates of objects in which ordering is important). Presents the list-structured data model, which has ordering as a fundamental feature. The model is based on atomic, aggregate, and list constructors and thus provides support for tree-structured and sequential representations of data. These constructors can be intermixed and allow the modelling of variable and recursive schemes. Such schemes occur naturally in list-oriented data such as tagged text and in list-processing applications. This approach is generalized to deal with the tree-structured representation and then incorporated in operations for searching, marking, updating, and restructuring list-structure instances. The operations form the core of a query language wherein users can succinctly and naturally formulate complex problems typically encountered in list-oriented database applications.
  4. Chen, L.-C.: Next generation search engine for the result clustering technology (2012) 0.17
    0.16810293 = product of:
      0.33620587 = sum of:
        0.33620587 = sum of:
          0.28864795 = weight(_text_:tree in 105) [ClassicSimilarity], result of:
            0.28864795 = score(doc=105,freq=6.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.7526697 = fieldWeight in 105, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=105)
          0.047557916 = weight(_text_:22 in 105) [ClassicSimilarity], result of:
            0.047557916 = score(doc=105,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.23214069 = fieldWeight in 105, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=105)
      0.5 = coord(1/2)
    
    Abstract
    Result clustering has recently attracted a lot of attention because it provides users with a more succinct overview of relevant search results than traditional search engines do. This chapter proposes a mixed clustering method to organize all returned search results into a hierarchical tree structure. The clustering method accomplishes two main tasks: one is label construction and the other is tree building. This chapter uses precision to measure the quality of the clustering results. According to the results of the experiments, the author preliminarily concludes that the performance of the system is better than that of many other well-known commercial and academic systems. This chapter makes several contributions. First, it presents a high-performance system based on the clustering method. Second, it develops a divisive hierarchical clustering algorithm to organize all returned snippets into a hierarchical tree structure. Third, it performs a wide range of experimental analyses to show that almost all commercial systems are significantly better than most current academic systems.
    Date
    17. 4.2012 15:22:11
  5. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.16
    0.1586916 = product of:
      0.3173832 = sum of:
        0.3173832 = sum of:
          0.2777516 = weight(_text_:tree in 2738) [ClassicSimilarity], result of:
            0.2777516 = score(doc=2738,freq=8.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.7242567 = fieldWeight in 2738, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2738)
          0.039631598 = weight(_text_:22 in 2738) [ClassicSimilarity], result of:
            0.039631598 = score(doc=2738,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.19345059 = fieldWeight in 2738, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2738)
      0.5 = coord(1/2)
    
    Abstract
    Navigating through hyperlinks within a Web site to look for information from one of its Web pages without the support of a site map can be inefficient and ineffective. Although the content of a Web site is usually organized with an inherent structure like a topic hierarchy, which is a directed tree rooted at a Web site's homepage whose vertices and edges correspond to Web pages and hyperlinks, such a topic hierarchy is not always available to the user. In this work, we studied the problem of automatic generation of Web sites' topic hierarchies. We modeled a Web site's link structure as a weighted directed graph and proposed methods for estimating edge weights based on eight types of features and three learning algorithms, namely decision trees, naïve Bayes classifiers, and logistic regression. Three graph algorithms, namely breadth-first search, shortest-path search, and directed minimum-spanning tree, were adapted to generate the topic hierarchy based on the graph model. We have tested the model and algorithms on real Web sites. It is found that the directed minimum-spanning tree algorithm with the decision tree as the weight learning algorithm achieves the highest performance with an average accuracy of 91.9%.
    Date
    22. 3.2009 12:51:47
  6. Advances in classification research. Vol.4 : proceedings of the 4th ASIS SIG/CR Classification Research Workshop (1995) 0.15
    0.15435994 = sum of:
      0.036519915 = product of:
        0.10955974 = sum of:
          0.10955974 = weight(_text_:objects in 169) [ClassicSimilarity], result of:
            0.10955974 = score(doc=169,freq=2.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.35234275 = fieldWeight in 169, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=169)
        0.33333334 = coord(1/3)
      0.11784003 = product of:
        0.23568006 = sum of:
          0.23568006 = weight(_text_:tree in 169) [ClassicSimilarity], result of:
            0.23568006 = score(doc=169,freq=4.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.6145522 = fieldWeight in 169, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=169)
        0.5 = coord(1/2)
    
    Content
    Contains the contributions: ABAWAJY, J.H. and M.A. SHEPHERD: Supporting a multi-hierarchical classification in the object-oriented paradigm; BOWKER, L.: Multidimensional classification of concepts for terminological purposes; COCHRANE, P.: Warrant for concepts in classification schemes; EUZENAT, J.: Brief overview of T-TREE: The TROPES Taxonomy Building Tool; HEMMASI, H., F. ROWLEY and J.D. ANDERSON: Isolating and reorganizing core vocabulary from Library of Congress Music Headings for use in the Music Thesaurus; JACOB, E.K.: Communication and category structure: the communicative process as a constraint on the semantic representation of information; KIM, N.-H., J.C. FRENCH and D.E. BROWN: Boolean query reformulation with the Query Tree Classifier; KLEINBERG, I.: Programming knowledge: on indexing software for reuse and not indexing documentation at all; LIN, X., G. MARCHIONINI and D. SOERGEL: Category-based and association-based map displays by human subjects; MINEAU, G.W.: The classification of structured knowledge objects; ZENG, L., D.K. GAPEN and S. SCHMITT: Developing intellectual access and control mechanisms for discipline-based virtual libraries that feature media integration
  7. Gudes, E.: ¬A uniform indexing scheme for object-oriented databases (1997) 0.14
    0.14280592 = product of:
      0.28561184 = sum of:
        0.28561184 = sum of:
          0.22220129 = weight(_text_:tree in 1694) [ClassicSimilarity], result of:
            0.22220129 = score(doc=1694,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.57940537 = fieldWeight in 1694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0625 = fieldNorm(doc=1694)
          0.06341056 = weight(_text_:22 in 1694) [ClassicSimilarity], result of:
            0.06341056 = score(doc=1694,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.30952093 = fieldWeight in 1694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1694)
      0.5 = coord(1/2)
    
    Abstract
    Proposes a uniform indexing scheme for enhancing the performance of object-oriented databases. It is based on a single B-tree and combines both the hierarchical and nested indexing schemes. The uniformity of this scheme enables compact and optimised code for dealing with a large range of queries on the one hand, and flexibility in adding and removing indexed paths on the other. Discusses the performance and presents an extensive experimental analysis for the class-hierarchy case. The results show the advantages of the scheme for small-range, clustered-set queries.
    Source
    Information systems. 22(1997) no.4, S.199-221
  8. Tang, X.-B.; Liu, G.-C.; Yang, J.; Wei, W.: Knowledge-based financial statement fraud detection system : based on an ontology and a decision tree (2018) 0.14
    0.14161898 = product of:
      0.28323796 = sum of:
        0.28323796 = sum of:
          0.23568006 = weight(_text_:tree in 4306) [ClassicSimilarity], result of:
            0.23568006 = score(doc=4306,freq=4.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.6145522 = fieldWeight in 4306, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=4306)
          0.047557916 = weight(_text_:22 in 4306) [ClassicSimilarity], result of:
            0.047557916 = score(doc=4306,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.23214069 = fieldWeight in 4306, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4306)
      0.5 = coord(1/2)
    
    Abstract
    Financial statement fraud has seriously affected investors' confidence in the stock market and economic stability. Several serious financial statement fraud events have caused huge economic losses. Intelligent financial statement fraud detection has thus been the topic of recent studies. In this paper, we developed a knowledge-based financial statement fraud detection system based on a financial statement detection ontology and detection rules extracted from a C4.5 decision tree algorithm. Through discovering the patterns of financial statement fraud activity, we defined the scope of our financial statement domain ontology. By utilizing SWRL rules and the Pellet inference engine in domain ontology, we detected financial statement fraud activities and discovered implicit knowledge. This system can be used to support investors' decision-making and provide early warning to regulators.
    Date
    21. 6.2018 10:22:43
  9. Proceedings of the 4th ASIS SIG/CR Classification Research Workshop, 24.10.1993 (1993) 0.13
    0.12863328 = sum of:
      0.03043326 = product of:
        0.09129978 = sum of:
          0.09129978 = weight(_text_:objects in 7066) [ClassicSimilarity], result of:
            0.09129978 = score(doc=7066,freq=2.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.29361898 = fieldWeight in 7066, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=7066)
        0.33333334 = coord(1/3)
      0.098200016 = product of:
        0.19640003 = sum of:
          0.19640003 = weight(_text_:tree in 7066) [ClassicSimilarity], result of:
            0.19640003 = score(doc=7066,freq=4.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.5121268 = fieldWeight in 7066, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=7066)
        0.5 = coord(1/2)
    
    Content
    Contains the following contributions: ABAWAJY, J.H. and M.A. SHEPHERD: Supporting a multi-hierarchical classification in the object-oriented paradigm; AIMEUR, E. and G. GANASCIA: Reasoning with classification in interactive knowledge elicitation; BOWKER, L.: Multidimensional classification of concepts for terminological purposes; COCHRANE, P.: Warrant for concepts in classification schemes; EUZENAT, J.: Brief overview of T-TREE: the TROPES Taxonomy building tool; HEMMASI, H., F. ROWLEY and J.D. ANDERSON: Isolating and reorganizing core vocabulary from Library of Congress Music Headings for use in the Music Thesaurus; JACOB, E.: Communication and category structure: the communicative process as a constraint on the semantic representation of information; KIM, N.-H., J.C. FRENCH and D.E. BROWN: Boolean query formulation with the query tree classifier; KLEINBERG, I.: Programming knowledge: on indexing software for reuse and not indexing documentation at all; LIN, X., G. MARCHIONINI and D. SOERGEL: Category-based and association-based map displays by human subjects; MINEAU, G.W.: The classification of structured knowledge objects; ZENG, L., D.K. GAPEN and S. SCHMITT: Developing intellectual access and control mechanisms for discipline-based virtual libraries that feature media integration
  10. Bertelmann, R.; Rusch-Feja, D.: Informationsretrieval im Internet : Surfen, Browsen, Suchen - mit einem Überblick über strukturierte Informationsangebote (1997) 0.12
    0.124955185 = product of:
      0.24991037 = sum of:
        0.24991037 = sum of:
          0.19442613 = weight(_text_:tree in 217) [ClassicSimilarity], result of:
            0.19442613 = score(doc=217,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.5069797 = fieldWeight in 217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=217)
          0.055484235 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
            0.055484235 = score(doc=217,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.2708308 = fieldWeight in 217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=217)
      0.5 = coord(1/2)
    
    Abstract
    Targeted searching on the Internet takes place primarily with the help of search engines. In addition, however, there is already a wealth of structured information offerings, curated lists, and collection sites that are referred to as Clearinghouse, Subject Gateway, Subject Tree, or Resource Pages. Such intellectually compiled overviews usually already give an indication of the content and the subject level of a source. Since the way these collection sites prepare their material varies considerably, knowledge of their indexing criteria is indispensable for successful retrieval.
    Date
    9. 7.2000 11:31:22
  11. Trotman, A.: Searching structured documents (2004) 0.12
    0.124955185 = product of:
      0.24991037 = sum of:
        0.24991037 = sum of:
          0.19442613 = weight(_text_:tree in 2538) [ClassicSimilarity], result of:
            0.19442613 = score(doc=2538,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.5069797 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
          0.055484235 = weight(_text_:22 in 2538) [ClassicSimilarity], result of:
            0.055484235 = score(doc=2538,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.2708308 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
      0.5 = coord(1/2)
    
    Abstract
    Structured document interchange formats such as XML and SGML are ubiquitous; however, information retrieval systems supporting structured searching are not. Structured searching can result in increased precision. A search for the author "Smith" in an unstructured corpus of documents specializing in iron-working could have a lower precision than a structured search for "Smith as author" in the same corpus. Analysis of XML retrieval languages identifies additional functionality that must be supported, including searching at, and broken across, multiple nodes in the document tree. A data structure is developed to support structured document searching. Application of this structure to information retrieval is then demonstrated. Document ranking is examined and adapted specifically for structured searching.
    Date
    14. 8.2004 10:39:22
  12. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.12
    0.124955185 = product of:
      0.24991037 = sum of:
        0.24991037 = sum of:
          0.19442613 = weight(_text_:tree in 5273) [ClassicSimilarity], result of:
            0.19442613 = score(doc=5273,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.5069797 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
          0.055484235 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
            0.055484235 = score(doc=5273,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.2708308 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
      0.5 = coord(1/2)
    
    Abstract
    In text categorization tasks, classification over a class hierarchy can give better results than classification without the hierarchy. Currently, because a large number of documents are divided into several subgroups in a hierarchy, we can appropriately use a hierarchical classification method. However, we have no systematic method to build a hierarchical classification system that performs well with large collections of practical data. In this article, we introduce a new evaluation scheme for internal node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for the task of constructing classifiers applied to a hierarchy tree with many levels.
    Date
    22. 7.2006 16:24:52
  13. Walker, T.D.: Medieval faceted knowledge classification : Ramon Llull's trees of science (1996) 0.12
    0.124955185 = product of:
      0.24991037 = sum of:
        0.24991037 = sum of:
          0.19442613 = weight(_text_:tree in 3232) [ClassicSimilarity], result of:
            0.19442613 = score(doc=3232,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.5069797 = fieldWeight in 3232, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3232)
          0.055484235 = weight(_text_:22 in 3232) [ClassicSimilarity], result of:
            0.055484235 = score(doc=3232,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.2708308 = fieldWeight in 3232, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3232)
      0.5 = coord(1/2)
    
    Abstract
    Ramon Llull [also: Raimundus Lullus] (1232-1316) wrote many didactic and theoretical works that demonstrate an exhaustive and creative approach to the organization of knowledge. His encyclopedic 'Arbre de sciència' (1296) was a multi-volume summation of human knowledge, organized according to a plan that could be applied to other works. Set against a background of Llull's other tree-based works, including the 'Libre del gentil e dels tres savis' (1274-1289) and the 'Arbre de filosofia desiderat' (1294), the 'Arbre de sciència' is described and analyzed as a faceted classification system.
    Date
    26. 9.2010 19:02:22
  14. Indexing techniques for advanced database systems (1997) 0.12
    0.122149855 = sum of:
      0.052711956 = product of:
        0.15813586 = sum of:
          0.15813586 = weight(_text_:objects in 5961) [ClassicSimilarity], result of:
            0.15813586 = score(doc=5961,freq=6.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.508563 = fieldWeight in 5961, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5961)
        0.33333334 = coord(1/3)
      0.0694379 = product of:
        0.1388758 = sum of:
          0.1388758 = weight(_text_:tree in 5961) [ClassicSimilarity], result of:
            0.1388758 = score(doc=5961,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.36212835 = fieldWeight in 5961, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5961)
        0.5 = coord(1/2)
    
    Abstract
    Recent years have seen an explosive growth in the use of new database applications such as CAD/CAM systems, spatial information systems, and multimedia information systems. The needs of these applications are far more complex than those of traditional business applications. They call for support of objects with complex data types, such as images and spatial objects, and for support of objects with wildly varying numbers of index terms, such as documents. Traditional indexing techniques such as the B-tree and its variants do not efficiently support these applications, and so new indexing mechanisms have been developed. As a result of the demand for database support for new applications, there has been a proliferation of new indexing techniques. The need for a book addressing indexing problems in advanced applications is evident. For practitioners and database and application developers, this book explains best practice, guiding the selection of appropriate indexes for each application. For researchers, this book provides a foundation for the development of new and more robust indexes. For newcomers, this book is an overview of the wide range of advanced indexing techniques. "Indexing Techniques for Advanced Database Systems" is suitable as a secondary text for a graduate level course on indexing techniques, and as a reference for researchers and practitioners in industry.
  15. Hansson, J.: ¬The materiality of knowledge organization : epistemology, metaphors and society (2013) 0.12
    0.119845405 = sum of:
      0.036519915 = product of:
        0.10955974 = sum of:
          0.10955974 = weight(_text_:objects in 1360) [ClassicSimilarity], result of:
            0.10955974 = score(doc=1360,freq=2.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.35234275 = fieldWeight in 1360, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=1360)
        0.33333334 = coord(1/3)
      0.08332549 = product of:
        0.16665098 = sum of:
          0.16665098 = weight(_text_:tree in 1360) [ClassicSimilarity], result of:
            0.16665098 = score(doc=1360,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.43455404 = fieldWeight in 1360, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=1360)
        0.5 = coord(1/2)
    
    Abstract
    This article discusses the relation between epistemology, social organization and knowledge organization. Three examples are used to show how this relation has proven to be historically stable: 1) the organization of knowledge in 18th century encyclopedias; 2) the problem of bias in the international introduction of DDC in early 20th century libraries in Scandinavia; and 3) the practice of social tagging and folksonomies in contemporary late capitalist society. By using the concept of 'materiality' and the theoretical contribution on the documentality of social objects by Maurizio Ferraris, an understanding of the character of the connection between epistemology and social order in knowledge organization systems is achieved.
    Content
    Contribution to a special issue: 'Paradigms of Knowledge and its Organization: The Tree, the Net and Beyond,' edited by Fulvio Mazzocchi and Gian Carlo Fedeli. - Cf.: http://www.ergon-verlag.de/isko_ko/downloads/ko_40_2013_6_d.pdf.
  16. Rao, R.: ¬Der 'Hyperbolic tree' und seine Verwandten : 3D-Interfaces erleichtern den Umgang mit grossen Datenmengen (2000) 0.12
    0.11784003 = product of:
      0.23568006 = sum of:
        0.23568006 = product of:
          0.47136012 = sum of:
            0.47136012 = weight(_text_:tree in 5053) [ClassicSimilarity], result of:
              0.47136012 = score(doc=5053,freq=4.0), product of:
                0.38349885 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.058502786 = queryNorm
                1.2291044 = fieldWeight in 5053, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5053)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Object
    Hyperbolic tree
  17. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.116696894 = sum of:
      0.092917934 = product of:
        0.2787538 = sum of:
          0.2787538 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2787538 = score(doc=562,freq=2.0), product of:
              0.49598727 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.058502786 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.023778958 = product of:
        0.047557916 = sum of:
          0.047557916 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.047557916 = score(doc=562,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  18. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.11
    0.11295524 = sum of:
      0.085213125 = product of:
        0.25563937 = sum of:
          0.25563937 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
            0.25563937 = score(doc=5455,freq=8.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.82213306 = fieldWeight in 5455, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.33333334 = coord(1/3)
      0.027742118 = product of:
        0.055484235 = sum of:
          0.055484235 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
            0.055484235 = score(doc=5455,freq=2.0), product of:
              0.20486678 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058502786 = queryNorm
              0.2708308 = fieldWeight in 5455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.5 = coord(1/2)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
  19. AlQenaei, Z.M.; Monarchi, D.E.: ¬The use of learning techniques to analyze the results of a manual classification system (2016) 0.11
    0.11247703 = sum of:
      0.04303913 = product of:
        0.12911738 = sum of:
          0.12911738 = weight(_text_:objects in 2836) [ClassicSimilarity], result of:
            0.12911738 = score(doc=2836,freq=4.0), product of:
              0.31094646 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.058502786 = queryNorm
              0.41523993 = fieldWeight in 2836, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2836)
        0.33333334 = coord(1/3)
      0.0694379 = product of:
        0.1388758 = sum of:
          0.1388758 = weight(_text_:tree in 2836) [ClassicSimilarity], result of:
            0.1388758 = score(doc=2836,freq=2.0), product of:
              0.38349885 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.058502786 = queryNorm
              0.36212835 = fieldWeight in 2836, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2836)
        0.5 = coord(1/2)
    
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
  20. Homeopathic thesaurus : keyterms to be used in homeopathy (2000) 0.11
    0.111100644 = product of:
      0.22220129 = sum of:
        0.22220129 = product of:
          0.44440258 = sum of:
            0.44440258 = weight(_text_:tree in 3808) [ClassicSimilarity], result of:
              0.44440258 = score(doc=3808,freq=2.0), product of:
                0.38349885 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.058502786 = queryNorm
                1.1588107 = fieldWeight in 3808, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.125 = fieldNorm(doc=3808)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Issue
    Tree structure and alphabetical list.
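
The relevance figures in the entries above are Lucene "explain" listings for the ClassicSimilarity (TF-IDF) ranking model: each term clause is scored as queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq); a coord factor then scales a sub-query by the fraction of its clauses that matched, and the figure shown beside each record is the sum of its clause scores. The following minimal Python sketch is only an illustration of that arithmetic (it is not code from this retrieval system); it reproduces the "objects" clause of the first entry from the constants printed in its explain tree:

    from math import sqrt

    # Constants copied from entry 1's explain tree (term "objects", doc 3968).
    idf = 5.315071            # idf(docFreq=590, maxDocs=44218)
    query_norm = 0.058502786  # queryNorm
    field_norm = 0.046875     # fieldNorm(doc=3968)
    freq = 6.0                # termFreq of "objects" in the indexed field

    tf = sqrt(freq)                          # 2.4494898 = tf(freq=6.0)
    query_weight = idf * query_norm          # ~0.31094646 = queryWeight
    field_weight = tf * idf * field_norm     # ~0.6102756  = fieldWeight
    clause_score = query_weight * field_weight

    coord = 0.33333334                       # coord(1/3): 1 of 3 clauses matched
    print(clause_score)           # ~0.189763, the weight shown for _text_:objects
    print(clause_score * coord)   # ~0.063254, the first summand of entry 1's score

Adding the remaining clause scores of the same entry (0.16665098 for "tree" and 0.047557916 for "22", which sum to 0.2142089) gives the record total of 0.27746326 shown next to the title.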

Types

  • a 3625
  • m 395
  • el 228
  • s 170
  • x 42
  • b 39
  • i 24
  • r 22
  • ? 8
  • n 4
  • p 4
  • d 3
  • u 2
  • z 2
  • au 1
  • h 1
