Search (243 results, page 13 of 13)

  • type_ss:"s"
  1. XML data management : native XML and XML-enabled database systems (2003) 0.00
    0.002681145 = product of:
      0.0080434345 = sum of:
        0.0080434345 = product of:
          0.016086869 = sum of:
            0.016086869 = weight(_text_:indexing in 2073) [ClassicSimilarity], result of:
              0.016086869 = score(doc=2073,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.08458473 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again, examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but two that are more research-oriented and general follow them. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications, an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single benchmark set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used. In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increased index and data file sizes. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMSs such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for storing XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic: for example, some tags used in the examples are indexed (perhaps a separate examples index would be better), and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron", but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable.
  2. Current theory in library and information science (2002) 0.00
    0.002681145 = product of:
      0.0080434345 = sum of:
        0.0080434345 = product of:
          0.016086869 = sum of:
            0.016086869 = weight(_text_:indexing in 822) [ClassicSimilarity], result of:
              0.016086869 = score(doc=822,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.08458473 = fieldWeight in 822, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
However, for well over a century, major libraries in developed nations have been engaging in sophisticated measurement of their operations, and thoughtful scholars have been involved along the way; if no "unified theory" has emerged thus far, why would it happen in the near future? What if "libraries" are a historically determined conglomeration of distinct functions, some of which are much less important than others? It is telling that McGrath cites as many studies on brittle paper as he does investigations of reference services among his constellation of measurable services, even while acknowledging that the latter (as an aspect of "circulation") is more "essential." If one were to include in a unified theory similar phenomena outside of libraries (e.g., what happens in bookstores and WWW searches), it can be seen how difficult a coordinated explanation might become. Ultimately the value of McGrath's chapter is not in convincing the reader that a unified theory might emerge, but rather in highlighting the best in recent studies that examine library operations, identifying robust conclusions, and arguing for the necessity of clarifying and coordinating common variables and units of analysis. McGrath's article is one that would be useful for a general course in LIS methodology, and certainly for more specific lectures on the evaluation of libraries. I'm going to focus most of my comments on the remaining articles about theory, rather than the others that offer empirical results about the growth or quality of literature. I'll describe the latter only briefly. The best way to approach this issue is by first reading McKechnie and Pettigrew's thorough survey of the "Use of Theory in LIS research." Earlier results of their extensive content analysis of 1,160 LIS articles have been published in other journals before, but the work is especially pertinent here. These authors find that only a third of LIS literature makes overt reference to theory, and that both usage and type of theory are correlated with the specific domain of the research (e.g., historical treatments versus user studies versus information retrieval). Lynne McKechnie and Karen Pettigrew identify four general sources of theory: LIS, the Humanities, Social Sciences and Sciences. This approach makes it obvious that the predominant source of theory is the social sciences (45%), followed by LIS (30%), the sciences (19%) and the humanities (5%), despite a predominance (almost 60%) of articles with science-related content. The authors discuss interdisciplinarity at some length, noting the great many non-LIS authors and theories which appear in the LIS literature, and the tendency for native LIS theories to go uncited outside of the discipline. Two other articles emphasize the ways in which theory has evolved. The more general of these two is Jack Glazier and Robert Grover's update of their classic 1986 Taxonomy of Theory in LIS. This article describes an elaborated version, called the "Circuits of Theory," offering definitions of a hierarchy of terms ranging from "world view" through "paradigm," "grand theory" and (ultimately) "symbols." Glazier & Grover's one-paragraph example of how theory was applied in their study of city managers is much too brief and is at odds with the emphasis on quantitative indicators of literature found in the rest of the volume.
The second article about the evolution of theory, Richard Smiraglia's "The progress of theory in knowledge organization," restricts itself to the history of thinking about cataloging and indexing. Smiraglia traces the development of theory from a pragmatic concern with "what works," to a reliance on empirical tests, to an emerging flirtation with historicist approaches to knowledge.
  3. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0022438637 = product of:
      0.0067315907 = sum of:
        0.0067315907 = product of:
          0.013463181 = sum of:
            0.013463181 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.013463181 = score(doc=1789,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    23. 3.2008 19:10:22
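
The score explanations shown with each result are standard Lucene ClassicSimilarity (TF-IDF) explain trees: for each matching term, fieldWeight = tf(freq) x idf x fieldNorm and queryWeight = idf x queryNorm; the term score queryWeight x fieldWeight is then scaled by the coord factors. A minimal sketch in Python reproducing the first explanation above (the function name and parameter names are illustrative, not Lucene API; the input values are taken directly from the explain tree for result 1):

import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
    """Recompute a Lucene ClassicSimilarity explain tree (illustrative sketch)."""
    tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.8278677 for docFreq=2614, maxDocs=44218
    query_weight = idf * query_norm                    # 0.19018644
    field_weight = tf * idf * field_norm               # 0.08458473 (fieldWeight)
    score = query_weight * field_weight                # 0.016086869 (weight of _text_:indexing)
    for c in coords:                                   # coord(1/2) and coord(1/3)
        score *= c
    return score

# Values from the explanation of result 1 ("XML data management", doc 2073):
print(classic_similarity_score(freq=2.0, doc_freq=2614, max_docs=44218,
                               query_norm=0.049684696, field_norm=0.015625,
                               coords=[0.5, 1.0 / 3.0]))   # ~ 0.002681145

The explanations for results 2 and 3 follow the same computation with their own docFreq, idf and fieldNorm values.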

Languages

  • e 194
  • d 44
  • m 9
  • i 1

Types

  • m 110
  • el 4
  • r 1
