Search (4 results, page 1 of 1)

  • author_ss:"Metzler, D.P."
  1. Metzler, D.P.; Haas, S.W.: The constituent object parser : syntactic structure matching for information retrieval (1989) 0.01
    0.008678758 = product of:
      0.0607513 = sum of:
        0.017741129 = weight(_text_:information in 3607) [ClassicSimilarity], result of:
          0.017741129 = score(doc=3607,freq=6.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.3359395 = fieldWeight in 3607, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3607)
        0.04301017 = weight(_text_:retrieval in 3607) [ClassicSimilarity], result of:
          0.04301017 = score(doc=3607,freq=4.0), product of:
            0.09099928 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030083254 = queryNorm
            0.47264296 = fieldWeight in 3607, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=3607)
      0.14285715 = coord(2/14)
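    The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) debug output. A minimal Python sketch, using Lucene's published formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)), reproduces the `retrieval` clause and the final score from the values shown (tiny deviations reflect Lucene's single-precision arithmetic):

```python
import math

# Lucene ClassicSimilarity building blocks (TF-IDF with length norm)
def tf(freq):
    # term-frequency factor: sqrt of the raw term frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency over the whole index
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Values copied from the _text_:retrieval clause of result 1 (doc 3607)
query_norm = 0.030083254
field_norm = 0.078125  # encodes field length (shorter field -> larger norm)

idf_retrieval = idf(5836, 44218)                       # ~ 3.024915
query_weight  = idf_retrieval * query_norm             # ~ 0.09099928
field_weight  = tf(4.0) * idf_retrieval * field_norm   # ~ 0.47264296
score_clause  = query_weight * field_weight            # ~ 0.04301017

# Two of the 14 query clauses matched, so the clause sum is scaled
# by the coordination factor coord(2/14)
coord = 2 / 14
final = (score_clause + 0.017741129) * coord           # ~ 0.008678758

print(score_clause, final)
```

The same arithmetic explains every tree on this page; only freq, docFreq, and fieldNorm change per clause and document.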
    
    Abstract
    The constituent object parser is designed to improve the precision and recall performance of information retrieval by providing more powerful matching procedures. Describes the dependency tree representations and the relationship between the intended use of the parser and its design.
    Source
    ACM transactions on information systems. 7(1989) no.3, pp.292-316
  2. Metzler, D.P.; Haas, S.W.; Cosic, C.L.; Wheeler, L.H.: Constituent object parsing for information retrieval and similar text processing problems (1989) 0.01
    0.006943006 = product of:
      0.04860104 = sum of:
        0.0141929025 = weight(_text_:information in 2858) [ClassicSimilarity], result of:
          0.0141929025 = score(doc=2858,freq=6.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.2687516 = fieldWeight in 2858, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2858)
        0.034408137 = weight(_text_:retrieval in 2858) [ClassicSimilarity], result of:
          0.034408137 = score(doc=2858,freq=4.0), product of:
            0.09099928 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030083254 = queryNorm
            0.37811437 = fieldWeight in 2858, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2858)
      0.14285715 = coord(2/14)
    
    Abstract
    Describes the architecture and functioning of the Constituent Object Parser. This system has been developed specifically for text processing applications, such as information retrieval, that can benefit from structural comparisons between elements of text, such as a query and a potentially relevant abstract. Describes the general way in which this objective influenced the design of the system.
    Source
    Journal of the American Society for Information Science. 40(1989) no.6, pp.398-423
  3. Metzler, D.P.: Connectionist and symbolic information processing : a critical analysis and suggested research agenda for connectionism from the symbolic perspective (1990) 0.00
    0.0011706108 = product of:
      0.01638855 = sum of:
        0.01638855 = weight(_text_:information in 4903) [ClassicSimilarity], result of:
          0.01638855 = score(doc=4903,freq=8.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.3103276 = fieldWeight in 4903, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4903)
      0.071428575 = coord(1/14)
    
    Imprint
    Medford, NJ : Learned Information Inc.
    Source
    ASIS'90: Information in the year 2000, from research to applications. Proc. of the 53rd Annual Meeting of the American Society for Information Science, Toronto, Canada, 4.-8.11.1990. Ed. by Diana Henderson
  4. Kolluri, V.; Metzler, D.P.: Knowledge guided rule learning (1999) 0.00
    6.5439136E-4 = product of:
      0.009161479 = sum of:
        0.009161479 = weight(_text_:information in 6550) [ClassicSimilarity], result of:
          0.009161479 = score(doc=6550,freq=10.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.1734784 = fieldWeight in 6550, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=6550)
      0.071428575 = coord(1/14)
    
    Abstract
    Rule learning algorithms, developed by the traditional supervised machine learning research community, are used as data analysis tools for generating accurate concept definitions, given a set of pre-classified instances and a goal task (concept class). Most rule learners use straightforward data-driven approaches, applying information-theoretic principles to search for statistically defined "interesting" patterns in the data. There are two main drawbacks to such purely data-driven approaches. First, they perform poorly when insufficient data are available. Second, when large training data sets are available, they tend to generate many uninteresting patterns, and it is usually left to the domain expert to distinguish the "useful" pieces of information from the rest. The size of this problem (a data mining issue unto itself) suggests the need to guide the learning system's search to relevant subspaces within the space of all possible hypotheses. This paper explores the utility of using prior domain knowledge (in the form of taxonomies over attributes, attribute values, and concept classes) to constrain the rule learner's search by requiring it to be consistent with what is already known about the domain. Spreading Activation Learning (SAL), using the marker propagation techniques introduced by Aronis and Provost (1994), is used to learn efficiently over taxonomically structured attributes and attribute values. An extension to the SAL methodology to handle rule learning over concept class values is presented. By representing the range of numeric (continuous) attribute values in the form of simplified IS-A taxonomies, the SAL methodology is shown to be capable of handling numeric attribute values. Large taxonomies over value sets (especially over numeric value sets) usually result in many redundant rules; this problem can be addressed by pruning the rule set using "rule interest" measures.
    The focus of this study is to explore the utility of taxonomic structures in rule learning and, in particular, their use as a way of incorporating background knowledge into the rule learning process. Initial results from ongoing research are presented.
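    The idea of constraining a rule learner with an IS-A taxonomy can be illustrated with a minimal sketch (hypothetical toy data and helper names; this is not the SAL implementation itself): a single-attribute rule value is generalized upward through the taxonomy and kept at the most general level that still covers no negative training instance.

```python
# Toy illustration of taxonomy-constrained rule generalization.
# IS-A taxonomy over values of one attribute (child -> parent); hypothetical data.
parent = {
    "poodle": "dog", "beagle": "dog",
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal",
}

def ancestors(value):
    """Yield the value and its successive IS-A generalizations."""
    while value is not None:
        yield value
        value = parent.get(value)

def covers(rule_value, instance_value):
    """A rule value covers an instance if it equals the instance's
    value or is one of its taxonomic ancestors."""
    return rule_value in ancestors(instance_value)

# Training data: (attribute value, class label)
data = [("poodle", "+"), ("beagle", "+"), ("cat", "-")]

def most_general_consistent(seed):
    """Generalize the seed value up the taxonomy while the rule
    'IF attr IS-A value THEN +' covers no negative instance."""
    best = seed
    for candidate in ancestors(seed):
        if any(covers(candidate, v) for v, label in data if label == "-"):
            break  # candidate now covers a negative -> too general
        best = candidate
    return best

print(most_general_consistent("poodle"))  # generalizes to "dog"; "mammal" would cover "cat"
```

The taxonomy plays the role of the background knowledge the abstract describes: instead of searching all possible value sets, the learner only considers values licensed by the IS-A hierarchy, stopping generalization as soon as consistency with the labels would be lost.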
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods