Search (3798 results, page 1 of 190)

  1. Chen, L.-C.: Next generation search engine for the result clustering technology (2012) 0.14
    0.14353731 = product of:
      0.28707463 = sum of:
        0.28707463 = sum of:
          0.24646656 = weight(_text_:tree in 105) [ClassicSimilarity], result of:
            0.24646656 = score(doc=105,freq=6.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.7526697 = fieldWeight in 105, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=105)
          0.04060807 = weight(_text_:22 in 105) [ClassicSimilarity], result of:
            0.04060807 = score(doc=105,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 105, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=105)
      0.5 = coord(1/2)
    
    Abstract
    Result clustering has recently attracted a lot of attention because it provides users with a more succinct overview of relevant search results than traditional search engines do. This chapter proposes a mixed clustering method that organizes all returned search results into a hierarchical tree structure. The clustering method accomplishes two main tasks: label construction and tree building. The chapter uses precision to measure the quality of the clustering results. According to the results of the experiments, the author preliminarily concluded that the performance of the system is better than that of many other well-known commercial and academic systems. This chapter makes several contributions. First, it presents a high-performance system based on the clustering method. Second, it develops a divisive hierarchical clustering algorithm to organize all returned snippets into a hierarchical tree structure. Third, it performs a wide range of experimental analyses to show that almost all commercial systems are significantly better than most current academic systems.
    Date
    17. 4.2012 15:22:11
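The nested breakdowns attached to each hit are Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, the `tree` term score of the first entry can be recomputed from the factors listed in its explain tree (the formulas are the standard ClassicSimilarity ones; `queryNorm` and `fieldNorm` are copied from the output above):

```python
import math

# ClassicSimilarity building blocks (TF-IDF variant):
#   tf(freq)      = sqrt(freq)
#   idf(df, N)    = 1 + ln(N / (df + 1))
#   queryWeight   = idf * queryNorm
#   fieldWeight   = tf * idf * fieldNorm
#   term score    = queryWeight * fieldWeight

def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Values taken from the explain tree for "tree" in doc 105 above.
query_norm = 0.049953517
field_norm = 0.046875

i = idf(170, 44218)                       # ≈ 6.5552235, as reported
query_weight = i * query_norm             # ≈ 0.32745647
field_weight = tf(6.0) * i * field_norm   # ≈ 0.7526697
score = query_weight * field_weight       # ≈ 0.24646656
```

The document-level score then halves this via the `coord(1/2)` factor, matching the `0.14353731` shown for the hit.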
  2. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.14
    0.1355013 = product of:
      0.2710026 = sum of:
        0.2710026 = sum of:
          0.23716255 = weight(_text_:tree in 2738) [ClassicSimilarity], result of:
            0.23716255 = score(doc=2738,freq=8.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.7242567 = fieldWeight in 2738, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2738)
          0.03384006 = weight(_text_:22 in 2738) [ClassicSimilarity], result of:
            0.03384006 = score(doc=2738,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.19345059 = fieldWeight in 2738, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2738)
      0.5 = coord(1/2)
    
    Abstract
    Navigating through hyperlinks within a Web site to look for information from one of its Web pages without the support of a site map can be inefficient and ineffective. Although the content of a Web site is usually organized with an inherent structure like a topic hierarchy, which is a directed tree rooted at a Web site's homepage whose vertices and edges correspond to Web pages and hyperlinks, such a topic hierarchy is not always available to the user. In this work, we studied the problem of automatic generation of Web sites' topic hierarchies. We modeled a Web site's link structure as a weighted directed graph and proposed methods for estimating edge weights based on eight types of features and three learning algorithms, namely decision trees, naïve Bayes classifiers, and logistic regression. Three graph algorithms, namely breadth-first search, shortest-path search, and directed minimum-spanning tree, were adapted to generate the topic hierarchy based on the graph model. We have tested the model and algorithms on real Web sites. It is found that the directed minimum-spanning tree algorithm with the decision tree as the weight learning algorithm achieves the highest performance with an average accuracy of 91.9%.
    Date
    22. 3.2009 12:51:47
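Of the three graph algorithms the abstract names, breadth-first search is the simplest to illustrate. A rough sketch (not the authors' code; the site graph and its learned edge weights below are hypothetical) of extracting a topic hierarchy from a weighted link graph rooted at the homepage:

```python
from collections import deque

# Hypothetical site graph: page -> list of (target, weight) hyperlinks,
# with weights as estimated by some learned model (higher = more likely
# to be a parent-child link in the topic hierarchy).
links = {
    "home":     [("about", 0.9), ("products", 0.8)],
    "about":    [("team", 0.7), ("home", 0.1)],
    "products": [("widgets", 0.85), ("team", 0.2)],
    "team":     [],
    "widgets":  [],
}

def bfs_hierarchy(graph, root):
    """Adapt plain BFS to pick one parent per page: the first (shallowest)
    page that links to it; weights only order siblings during expansion."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target, _w in sorted(graph[page], key=lambda e: -e[1]):
            if target not in parent:      # first visit fixes the parent
                parent[target] = page
                queue.append(target)
    return parent

hierarchy = bfs_hierarchy(links, "home")
```

The directed minimum-spanning-tree variant the paper found best would instead pick, for each page, the incoming edge of an arborescence that maximizes total weight rather than the first edge BFS happens to traverse.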
  3. Gudes, E.: ¬A uniform indexing scheme for object-oriented databases (1997) 0.12
    0.12193707 = product of:
      0.24387413 = sum of:
        0.24387413 = sum of:
          0.18973003 = weight(_text_:tree in 1694) [ClassicSimilarity], result of:
            0.18973003 = score(doc=1694,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.57940537 = fieldWeight in 1694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0625 = fieldNorm(doc=1694)
          0.054144096 = weight(_text_:22 in 1694) [ClassicSimilarity], result of:
            0.054144096 = score(doc=1694,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.30952093 = fieldWeight in 1694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1694)
      0.5 = coord(1/2)
    
    Abstract
    Proposes a uniform indexing scheme for enhancing the performance of object-oriented databases. It is based on a single B-tree and combines both the hierarchical and nested indexing schemes. The uniformity of this scheme enables compact and optimised code for dealing with a large range of queries on the one hand, and flexibility in adding and removing indexed paths on the other. Discusses the performance and presents an extensive experimental analysis for the class-hierarchy case. The results show the advantages of the scheme for small-range, clustered-set queries
    Source
    Information systems. 22(1997) no.4, S.199-221
  4. Tang, X.-B.; Liu, G.-C.; Yang, J.; Wei, W.: Knowledge-based financial statement fraud detection system : based on an ontology and a decision tree (2018) 0.12
    0.12092358 = product of:
      0.24184716 = sum of:
        0.24184716 = sum of:
          0.2012391 = weight(_text_:tree in 4306) [ClassicSimilarity], result of:
            0.2012391 = score(doc=4306,freq=4.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.6145522 = fieldWeight in 4306, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=4306)
          0.04060807 = weight(_text_:22 in 4306) [ClassicSimilarity], result of:
            0.04060807 = score(doc=4306,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 4306, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4306)
      0.5 = coord(1/2)
    
    Abstract
    Financial statement fraud has seriously affected investors' confidence in the stock market and economic stability. Several serious financial statement fraud events have caused huge economic losses. Intelligent financial statement fraud detection has thus been the topic of recent studies. In this paper, we developed a knowledge-based financial statement fraud detection system based on a financial statement detection ontology and detection rules extracted from a C4.5 decision tree algorithm. Through discovering the patterns of financial statement fraud activity, we defined the scope of our financial statement domain ontology. By utilizing SWRL rules and the Pellet inference engine in domain ontology, we detected financial statement fraud activities and discovered implicit knowledge. This system can be used to support investors' decision-making and provide early warning to regulators.
    Date
    21. 6.2018 10:22:43
  5. Bertelmann, R.; Rusch-Feja, D.: Informationsretrieval im Internet : Surfen, Browsen, Suchen - mit einem Überblick über strukturierte Informationsangebote (1997) 0.11
    0.10669494 = product of:
      0.21338987 = sum of:
        0.21338987 = sum of:
          0.16601379 = weight(_text_:tree in 217) [ClassicSimilarity], result of:
            0.16601379 = score(doc=217,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.5069797 = fieldWeight in 217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=217)
          0.047376085 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
            0.047376085 = score(doc=217,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.2708308 = fieldWeight in 217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=217)
      0.5 = coord(1/2)
    
    Abstract
    Targeted searching on the Internet is done primarily with the help of search engines. Alongside these, however, there is already a wealth of structured information offerings, curated lists, and collection sites, known as clearinghouses, subject gateways, subject trees, or resource pages. Such intellectually compiled overviews usually already give indications of the content and the subject level of a source. Since the way these collection sites organize their material varies considerably, knowledge of their indexing criteria is indispensable for successful retrieval
    Date
    9. 7.2000 11:31:22
  6. Trotman, A.: Searching structured documents (2004) 0.11
    0.10669494 = product of:
      0.21338987 = sum of:
        0.21338987 = sum of:
          0.16601379 = weight(_text_:tree in 2538) [ClassicSimilarity], result of:
            0.16601379 = score(doc=2538,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.5069797 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
          0.047376085 = weight(_text_:22 in 2538) [ClassicSimilarity], result of:
            0.047376085 = score(doc=2538,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.2708308 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
      0.5 = coord(1/2)
    
    Abstract
    Structured document interchange formats such as XML and SGML are ubiquitous; however, information retrieval systems supporting structured searching are not. Structured searching can result in increased precision. A search for the author "Smith" in an unstructured corpus of documents specializing in iron-working could have a lower precision than a structured search for "Smith as author" in the same corpus. Analysis of XML retrieval languages identifies additional functionality that must be supported, including searching at, and broken across, multiple nodes in the document tree. A data structure is developed to support structured document searching. Application of this structure to information retrieval is then demonstrated. Document ranking is examined and adapted specifically for structured searching.
    Date
    14. 8.2004 10:39:22
  7. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.11
    0.10669494 = product of:
      0.21338987 = sum of:
        0.21338987 = sum of:
          0.16601379 = weight(_text_:tree in 5273) [ClassicSimilarity], result of:
            0.16601379 = score(doc=5273,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.5069797 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
          0.047376085 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
            0.047376085 = score(doc=5273,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.2708308 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
      0.5 = coord(1/2)
    
    Abstract
    In text categorization tasks, classification over a class hierarchy yields better results than classification without one. Because a large collection of documents is divided into several subgroups within a hierarchy, a hierarchical classification method can be applied appropriately. However, there has been no systematic method for building a hierarchical classification system that performs well with large collections of practical data. In this article, we introduce a new evaluation scheme for internal node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for the task of constructing classifiers applied to a hierarchy tree with many levels.
    Date
    22. 7.2006 16:24:52
  8. Walker, T.D.: Medieval faceted knowledge classification : Ramon Llull's trees of science (1996) 0.11
    0.10669494 = product of:
      0.21338987 = sum of:
        0.21338987 = sum of:
          0.16601379 = weight(_text_:tree in 3232) [ClassicSimilarity], result of:
            0.16601379 = score(doc=3232,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.5069797 = fieldWeight in 3232, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3232)
          0.047376085 = weight(_text_:22 in 3232) [ClassicSimilarity], result of:
            0.047376085 = score(doc=3232,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.2708308 = fieldWeight in 3232, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3232)
      0.5 = coord(1/2)
    
    Abstract
    Ramon Llull [also: Raimundus Lullus] (1232-1316) wrote many didactic and theoretical works that demonstrate an exhaustive and creative approach to the organization of knowledge. His encyclopedic 'Arbre de sciència' (1296) was a multi-volume summation of human knowledge, organized according to a plan that could be applied to other works. Set against the background of Llull's other tree-based works, including the 'Libre del gentil e dels tres savis' (1274-1289) and the 'Arbre de filosofia desiderat' (1294), the 'Arbre de sciència' is described and analyzed as a faceted classification system
    Date
    26. 9.2010 19:02:22
  9. Rao, R.: ¬Der 'Hyperbolic tree' und seine Verwandten : 3D-Interfaces erleichtern den Umgang mit grossen Datenmengen (2000) 0.10
    0.10061955 = product of:
      0.2012391 = sum of:
        0.2012391 = product of:
          0.4024782 = sum of:
            0.4024782 = weight(_text_:tree in 5053) [ClassicSimilarity], result of:
              0.4024782 = score(doc=5053,freq=4.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.2291044 = fieldWeight in 5053, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5053)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Object
    Hyperbolic tree
  10. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09964347 = sum of:
      0.07933943 = product of:
        0.23801827 = sum of:
          0.23801827 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23801827 = score(doc=562,freq=2.0), product of:
              0.42350647 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.049953517 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020304035 = product of:
        0.04060807 = sum of:
          0.04060807 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04060807 = score(doc=562,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  11. Homeopathic thesaurus : keyterms to be used in homeopathy (2000) 0.09
    0.09486502 = product of:
      0.18973003 = sum of:
        0.18973003 = product of:
          0.37946007 = sum of:
            0.37946007 = weight(_text_:tree in 3808) [ClassicSimilarity], result of:
              0.37946007 = score(doc=3808,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.1588107 = fieldWeight in 3808, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.125 = fieldNorm(doc=3808)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Issue
    Tree structure and alphabetical list.
  12. French, J.C.; Brown, D.E.; Kim, N.-H.: ¬A classification approach to Boolean query reformulation (1997) 0.09
    0.094120964 = product of:
      0.18824193 = sum of:
        0.18824193 = product of:
          0.37648386 = sum of:
            0.37648386 = weight(_text_:tree in 197) [ClassicSimilarity], result of:
              0.37648386 = score(doc=197,freq=14.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.1497219 = fieldWeight in 197, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.046875 = fieldNorm(doc=197)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    One of the difficulties in using current Boolean-based information retrieval systems is that it is hard for a user, especially a novice, to formulate an effective Boolean query. Query reformulation can be even more difficult and complex than formulation, since users often have difficulty incorporating the new information gained from the previous search into the next query. In this article, query reformulation is viewed as a classification problem, that is, classifying documents as either relevant or nonrelevant. A new reformulation algorithm is proposed which builds a tree-structured classifier, called a query tree, at each reformulation from a set of feedback documents retrieved by the previous search. The query tree can easily be transformed into a Boolean query. The query tree is compared to two query reformulation algorithms on benchmark test sets (CACM, CISI, and Medlars). In most experiments, the query tree showed significant improvements in precision over the two algorithms compared in this study. We attribute this improved performance to the ability of the query tree algorithm to select good search terms and to represent the relationships among search terms in a tree structure
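The transformation the abstract mentions can be sketched concretely: each path from the root of the query tree to a "relevant" leaf contributes a conjunction of term tests, and the paths are OR-ed together. The tree below is a hypothetical toy example, not the paper's actual representation:

```python
# A query tree: each internal node tests a term's presence in a document;
# leaves carry the predicted class. Toy tree, for illustration only.
query_tree = {
    "term": "smith",
    "present": {"term": "author", "present": "relevant", "absent": "nonrelevant"},
    "absent":  {"term": "ironworking", "present": "relevant", "absent": "nonrelevant"},
}

def to_boolean(node, path=()):
    """Collect each root-to-'relevant' path as an AND clause."""
    if isinstance(node, str):                      # leaf node
        return [" AND ".join(path)] if node == "relevant" else []
    term = node["term"]
    clauses = to_boolean(node["present"], path + (term,))
    clauses += to_boolean(node["absent"], path + (f"NOT {term}",))
    return clauses

query = " OR ".join(f"({c})" for c in to_boolean(query_tree))
# -> (smith AND author) OR (NOT smith AND ironworking)
```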
  13. Nicholson, D.; Steele, M.: CATRIONA: a distributed, locally-oriented. Z39.50 OPAC-based approach to cataloguing the Internet (1996) 0.09
    0.09145281 = product of:
      0.18290561 = sum of:
        0.18290561 = sum of:
          0.14229754 = weight(_text_:tree in 6734) [ClassicSimilarity], result of:
            0.14229754 = score(doc=6734,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.43455404 = fieldWeight in 6734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=6734)
          0.04060807 = weight(_text_:22 in 6734) [ClassicSimilarity], result of:
            0.04060807 = score(doc=6734,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 6734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6734)
      0.5 = coord(1/2)
    
    Abstract
    Describes the origins of the CATaloguing and Retrieval of Information Over Network Applications (CATRIONA) Study in the BUBL Subject Tree service and notes its aims: to investigate the requirements for developing procedures and applications for cataloguing and retrieval of networked resources (particularly via the Internet); and to explore the feasibility of a collaborative project to develop and integrate them with existing library systems. The project established that a distributed catalogue of networked resources, integrated via standard Z39.50 library system OPAC interfaces with information on hard copy resources, is already a practical proposition at a basic level. Notes that at least one Z39.50 OPAC client can search remote Z39.50 OPACs and retrieve USMARC records with URLs in MARC field 856
    Series
    Cataloging and classification quarterly; vol.22, nos.3/4
  14. Falquet, G.; Guyot, J.; Nerima, L.: Languages and tools to specify hypertext views on databases (1999) 0.09
    0.09145281 = product of:
      0.18290561 = sum of:
        0.18290561 = sum of:
          0.14229754 = weight(_text_:tree in 3968) [ClassicSimilarity], result of:
            0.14229754 = score(doc=3968,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.43455404 = fieldWeight in 3968, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=3968)
          0.04060807 = weight(_text_:22 in 3968) [ClassicSimilarity], result of:
            0.04060807 = score(doc=3968,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 3968, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3968)
      0.5 = coord(1/2)
    
    Abstract
    We present a declarative language for the construction of hypertext views on databases. The language is based on an object-oriented data model and a simple hypertext model with reference and inclusion links. A hypertext view specification consists of a collection of parameterized node schemes which specify how to construct node and link instances from the database contents. We show how this language can express different issues in hypertext view design. These include: the direct mapping of objects to nodes; the construction of complex nodes based on sets of objects; the representation of polymorphic sets of objects; and the representation of tree and graph structures. We have defined sublanguages corresponding to particular database models (relational, semantic, object-oriented) and implemented tools to generate Web views for these database models
    Date
    21.10.2000 15:01:22
  15. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.09
    0.09145281 = product of:
      0.18290561 = sum of:
        0.18290561 = sum of:
          0.14229754 = weight(_text_:tree in 2158) [ClassicSimilarity], result of:
            0.14229754 = score(doc=2158,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.43455404 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
          0.04060807 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
            0.04060807 = score(doc=2158,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
      0.5 = coord(1/2)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  16. Mazzocchi, F.; Fedeli, G.C.: Introduction to the special issue: 'Paradigms of Knowledge and its Organization: The Tree, the Net and Beyond' (2013) 0.08
    0.08384962 = product of:
      0.16769923 = sum of:
        0.16769923 = product of:
          0.33539847 = sum of:
            0.33539847 = weight(_text_:tree in 1357) [ClassicSimilarity], result of:
              0.33539847 = score(doc=1357,freq=4.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.0242536 = fieldWeight in 1357, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1357)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contribution to the special issue 'Paradigms of Knowledge and its Organization: The Tree, the Net and Beyond,' edited by Fulvio Mazzocchi and Gian Carlo Fedeli. - Cf.: http://www.ergon-verlag.de/isko_ko/downloads/ko_40_2013_6_a.pdf.
  17. Fachsystematik Bremen nebst Schlüssel 1970 ff. (1970 ff) 0.08
    0.08303622 = sum of:
      0.06611619 = product of:
        0.19834857 = sum of:
          0.19834857 = weight(_text_:3a in 3577) [ClassicSimilarity], result of:
            0.19834857 = score(doc=3577,freq=2.0), product of:
              0.42350647 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.049953517 = queryNorm
              0.46834838 = fieldWeight in 3577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3577)
        0.33333334 = coord(1/3)
      0.01692003 = product of:
        0.03384006 = sum of:
          0.03384006 = weight(_text_:22 in 3577) [ClassicSimilarity], result of:
            0.03384006 = score(doc=3577,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.19345059 = fieldWeight in 3577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3577)
        0.5 = coord(1/2)
    
    Content
    1. Agrarwissenschaften 1981. - 3. Allgemeine Geographie 2.1972. - 3a. Allgemeine Naturwissenschaften 1.1973. - 4. Allgemeine Sprachwissenschaft, Allgemeine Literaturwissenschaft 2.1971. - 6. Allgemeines. 5.1983. - 7. Anglistik 3.1976. - 8. Astronomie, Geodäsie 4.1977. - 12. bio Biologie, bcp Biochemie-Biophysik, bot Botanik, zoo Zoologie 1981. - 13. Bremensien 3.1983. - 13a. Buch- und Bibliothekswesen 3.1975. - 14. Chemie 4.1977. - 14a. Elektrotechnik 1974. - 15. Ethnologie 2.1976. - 16,1. Geowissenschaften. Sachteil 3.1977. - 16,2. Geowissenschaften. Regionaler Teil 3.1977. - 17. Germanistik 6.1984. - 17a,1. Geschichte. Teilsystematik hil. - 17a,2. Geschichte. Teilsystematik his Neuere Geschichte. - 17a,3. Geschichte. Teilsystematik hit Neueste Geschichte. - 18. Humanbiologie 2.1983. - 19. Ingenieurwissenschaften 1974. - 20. siehe 14a. - 21. Klassische Philologie 3.1977. - 22. Klinische Medizin 1975. - 23. Kunstgeschichte 2.1971. - 24. Kybernetik. 2.1975. - 25. Mathematik 3.1974. - 26. Medizin 1976. - 26a. Militärwissenschaft 1985. - 27. Musikwissenschaft 1978. - 27a. Noten 2.1974. - 28. Ozeanographie 3.1977. - 29. Pädagogik 8.1985. - 30. Philosophie 3.1974. - 31. Physik 3.1974. - 33. Politik, Politische Wissenschaft, Sozialwissenschaft. Soziologie. Länderschlüssel. Register 1981. - 34. Psychologie 2.1972. - 35. Publizistik und Kommunikationswissenschaft 1985. - 36. Rechtswissenschaften 1986. - 37. Regionale Geographie 3.1975. - 37a. Religionswissenschaft 1970. - 38. Romanistik 3.1976. - 39. Skandinavistik 4.1985. - 40. Slavistik 1977. - 40a. Sonstige Sprachen und Literaturen 1973. - 43. Sport 4.1983. - 44. Theaterwissenschaft 1985. - 45. Theologie 2.1976. - 45a. Ur- und Frühgeschichte, Archäologie 1970. - 47. Volkskunde 1976. - 47a. Wirtschaftswissenschaften 1971 // Schlüssel: 1. Länderschlüssel 1971. - 2. Formenschlüssel (Kurzform) 1974. - 3. Personenschlüssel Literatur 5. Fassung 1968
  18. Koyama, M.: ¬A fast retrieving algorithm of hierarchical relationships using tree structures (1998) 0.08
    0.083006896 = product of:
      0.16601379 = sum of:
        0.16601379 = product of:
          0.33202758 = sum of:
            0.33202758 = weight(_text_:tree in 6403) [ClassicSimilarity], result of:
              0.33202758 = score(doc=6403,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.0139594 = fieldWeight in 6403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6403)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
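The nested score explanations shown for these entries are labelled ClassicSimilarity, so they multiply out as plain Lucene TF-IDF arithmetic. A minimal sketch that reproduces the breakdown for entry 18 (weight(_text_:tree in 6403)) from the figures printed above; the formulas are the standard ClassicSimilarity definitions:

```python
import math

# Figures taken directly from the score explanation for entry 18.
freq, doc_freq, max_docs = 2.0, 170, 44218
query_norm, field_norm = 0.049953517, 0.109375

tf = math.sqrt(freq)                            # tf(freq=2.0) = 1.4142135
idf = 1 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq=170, maxDocs=44218) = 6.5552235
query_weight = idf * query_norm                 # queryWeight = 0.32745647
field_weight = tf * idf * field_norm            # fieldWeight = 1.0139594
term_score = query_weight * field_weight        # weight(_text_:tree) = 0.33202758
final = term_score * 0.5 * 0.5                  # two coord(1/2) factors -> 0.083006896
```

Multiplying the term score by the two coord(1/2) factors yields exactly the 0.083006896 shown next to the entry, which is how the headline relevance figures in this list are derived.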
    
  19. Siva, S.: Document identification and classification using transform coding of gray scale projections and neural tree network (2000) 0.08
    0.083006896 = product of:
      0.16601379 = sum of:
        0.16601379 = product of:
          0.33202758 = sum of:
            0.33202758 = weight(_text_:tree in 1970) [ClassicSimilarity], result of:
              0.33202758 = score(doc=1970,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.0139594 = fieldWeight in 1970, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1970)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.08
    0.083006896 = product of:
      0.16601379 = sum of:
        0.16601379 = product of:
          0.33202758 = sum of:
            0.33202758 = weight(_text_:tree in 3886) [ClassicSimilarity], result of:
              0.33202758 = score(doc=3886,freq=8.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                1.0139594 = fieldWeight in 3886, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3886)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper investigates the acceleration of t-SNE-an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots-using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N*logN). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
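The abstract describes replacing the O(N^2) repulsive part of the t-SNE gradient with a Barnes-Hut tree traversal. The following is not van den Maaten's implementation but a minimal stdlib sketch of the Barnes-Hut idea using the unnormalised t-SNE kernel 1/(1+d^2): any quadtree cell whose width-to-distance ratio falls below a threshold theta is summarised by its centre of mass, which is what brings the per-point cost down toward O(log N):

```python
import math
import random

class QuadNode:
    """One square cell of a quadtree over points in [0, 1]^2."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # cell centre and half-width
        self.mass = 0                 # number of points inside the cell
        self.mx = self.my = 0.0       # running centre of mass
        self.point = None             # the single point while this is a leaf
        self.children = None          # four sub-quadrants once the cell splits

    def insert(self, x, y):
        # update the cell's centre of mass on the way down
        self.mx = (self.mx * self.mass + x) / (self.mass + 1)
        self.my = (self.my * self.mass + y) / (self.mass + 1)
        self.mass += 1
        if self.children is None:
            if self.point is None:
                self.point = (x, y)
                return
            # second point arrived: split the cell and push the old point down
            h = self.half / 2
            self.children = [QuadNode(self.cx + dx * h, self.cy + dy * h, h)
                             for dx in (-1, 1) for dy in (-1, 1)]
            px, py = self.point
            self.point = None
            self._child(px, py).insert(px, py)
        self._child(x, y).insert(x, y)

    def _child(self, x, y):
        # children order from the comprehension: (-1,-1), (-1,1), (1,-1), (1,1)
        return self.children[(2 if x >= self.cx else 0) + (1 if y >= self.cy else 0)]

def repulsion(node, x, y, theta=0.5):
    """Sum of (x - xj, y - yj) / (1 + d^2) over all stored points, treating any
    cell with width/distance < theta as one point at its centre of mass."""
    if node is None or node.mass == 0:
        return 0.0, 0.0
    dx, dy = x - node.mx, y - node.my
    dist2 = dx * dx + dy * dy
    width = 2 * node.half
    if node.children is None or width * width < theta * theta * dist2:
        w = node.mass / (1.0 + dist2)  # one interaction for the whole cell
        return w * dx, w * dy
    fx = fy = 0.0
    for child in node.children:
        cfx, cfy = repulsion(child, x, y, theta)
        fx, fy = fx + cfx, fy + cfy
    return fx, fy

# demo: build a tree over random points and query the approximate repulsion
random.seed(7)
pts = [(random.random(), random.random()) for _ in range(200)]
root = QuadNode(0.5, 0.5, 0.5)
for x, y in pts:
    root.insert(x, y)
fx, fy = repulsion(root, *pts[0], theta=0.5)
```

With theta=0 no cell is ever summarised and the traversal reduces to the exact pairwise sum, so theta directly trades accuracy for speed, which mirrors the speed/accuracy knob discussed in the paper.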
