Search (233 results, page 1 of 12)

  • Filter: type_ss:"s"
  1. Classification and information control : Papers representing the work of the Classification Research Group during 1960-1968 (1969) 0.03
    0.032706924 = product of:
      0.081767306 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 3402) [ClassicSimilarity], result of:
              0.04120336 = score(doc=3402,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 3402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3402)
          0.5 = coord(1/2)
        0.061165623 = product of:
          0.12233125 = sum of:
            0.12233125 = weight(_text_:exercises in 3402) [ClassicSimilarity], result of:
              0.12233125 = score(doc=3402,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.47145814 = fieldWeight in 3402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3402)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
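    The indented breakdown above (and on every entry below) is Lucene's ClassicSimilarity explain output. A minimal sketch reproducing this first score from the printed factors, assuming the classic tf-idf recipe the explain tree names (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, coord = matching clauses / total clauses):

      from math import sqrt

      def term_contribution(freq, idf, field_norm, query_norm=0.036484417, coord=0.5):
          """One query term's leg: queryWeight * fieldWeight * coord."""
          query_weight = idf * query_norm               # e.g. 4.1274753 * 0.036484417 = 0.15058853
          field_weight = sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
          return query_weight * field_weight * coord

      problems  = term_contribution(freq=2.0, idf=4.1274753, field_norm=0.046875)  # 0.02060168
      exercises = term_contribution(freq=2.0, idf=7.11192,   field_norm=0.046875)  # 0.06116562
      print((problems + exercises) * 2 / 5)  # coord(2/5) -> 0.0327069, the score shown above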
    
    Content
    Contains the contributions: FAIRTHORNE, R.A.: 'Browsing' schemes and 'specialist' schemes; KYLE, B.R.F.: Lessons learned from experience in drafting the Kyle classification; MILLS, J.: Inadequacies of existing general classification schemes; COATES, E.J.: CRG proposals for a new general classification; TOMLINSON, H.: Notes on initial work for NATO classification; TOMLINSON, H.: Report on work for new general classification scheme; TOMLINSON, H.: Expansion of categories using mining terms; TOMLINSON, H.: Relationship between geology and mining; TOMLINSON, H.: Use of categories for sculpture; TOMLINSON, H.: Expansion of categories using terms from physics; TOMLINSON, H.: The distinction between physical and chemical entities; TOMLINSON, H.: Concepts within politics; TOMLINSON, H.: Problems arising from first GCS papers; AUSTIN, D.: The theory of integrative levels reconsidered as the basis of a general classification; AUSTIN, D.: Demonstration: provisional scheme for naturally occurring entities; AUSTIN, D.: Stages in classing and exercises; AUSTIN, D.: Report to the Library Association Research Committee on the use of the NATO grant
  2. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    0.030142525 = product of:
      0.07535631 = sum of:
        0.012139657 = product of:
          0.024279313 = sum of:
            0.024279313 = weight(_text_:problems in 150) [ClassicSimilarity], result of:
              0.024279313 = score(doc=150,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.1612295 = fieldWeight in 150, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
        0.06321666 = sum of:
          0.041812256 = weight(_text_:etc in 150) [ClassicSimilarity], result of:
            0.041812256 = score(doc=150,freq=4.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.2115817 = fieldWeight in 150, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.021404404 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.021404404 = score(doc=150,freq=6.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.4 = coord(2/5)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems, which links the concept of the semantic web to multimedia content analysis. A definition of the fuzzy knowledge representation that can be used in multimedia content applications is provided, together with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications, along with some examples of methods applicable to multimedia content analysis. Multimedia content analysis is a very diverse field that touches many other research fields at the same time; this creates strong diversity issues, since everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high, semantic level (e.g., objects, events, tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, a few chapters of recent research results are not sufficient to cover the state of the art of multimedia. The editors should have written an introductory chapter surveying multimedia content analysis approaches, basic problems, and technical issues and challenges, to introduce the field to the reader.
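    The fuzzy knowledge representation praised above is not reproduced in this footnote; as a generic illustration of the Zadeh-style membership grades such schemes build on (the feature names and grades below are invented, not taken from the book), a concept holds for a media item to a degree in [0, 1], and grades combine by min/max:

      # Zadeh fuzzy logic: membership grades in [0, 1]; AND = min, OR = max.
      def fuzzy_and(a, b):
          return min(a, b)

      def fuzzy_or(a, b):
          return max(a, b)

      # Hypothetical grades assigned by low-level analyzers to one video shot.
      membership = {"outdoor": 0.8, "crowd": 0.6, "daylight": 0.9}

      # Degree to which the shot matches the compound concept "outdoor crowd scene".
      print(fuzzy_and(membership["outdoor"], membership["crowd"]))  # 0.6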
  3. Languages of the world : cataloguing issues and problems (1993) 0.03
    0.02834487 = product of:
      0.070862174 = sum of:
        0.04120336 = product of:
          0.08240672 = sum of:
            0.08240672 = weight(_text_:problems in 4242) [ClassicSimilarity], result of:
              0.08240672 = score(doc=4242,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5472311 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
        0.02965881 = product of:
          0.05931762 = sum of:
            0.05931762 = weight(_text_:22 in 4242) [ClassicSimilarity], result of:
              0.05931762 = score(doc=4242,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.46428138 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    15. 6.1996 18:06:22
  4. Handbook of terminology management : Vol.2: Application-oriented terminology management (2001) 0.03
    0.028310552 = product of:
      0.07077638 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 1750) [ClassicSimilarity], result of:
              0.04120336 = score(doc=1750,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 1750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1750)
          0.5 = coord(1/2)
        0.050174702 = product of:
          0.100349404 = sum of:
            0.100349404 = weight(_text_:etc in 1750) [ClassicSimilarity], result of:
              0.100349404 = score(doc=1750,freq=4.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.50779605 = fieldWeight in 1750, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1750)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This is the second of two volumes designed to meet the practical needs of terminologists, translators, lexicographers, subject specialists, standardizers and others who have to solve terminological problems in their daily work. It covers a broad range of topics integrated from an international perspective and treats such fundamental issues as: practical methods of terminology management; types and applications of terminology management; creation and use of terminological tools; terminological applications in technical writing, translation and information management; natural language processing; language planning; legal and ethical concerns; and terminology training.
    LCSH
    Technology / Terminology / Handbooks, manuals, etc.
    Subject
    Technology / Terminology / Handbooks, manuals, etc.
  5. Paradigms and conceptual systems in knowledge organization : Proceedings of the Eleventh International ISKO Conference, 23-26 February 2010 Rome, Italy (2010) 0.02
    0.022413107 = product of:
      0.056032766 = sum of:
        0.04120336 = product of:
          0.08240672 = sum of:
            0.08240672 = weight(_text_:problems in 773) [ClassicSimilarity], result of:
              0.08240672 = score(doc=773,freq=8.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5472311 = fieldWeight in 773, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=773)
          0.5 = coord(1/2)
        0.014829405 = product of:
          0.02965881 = sum of:
            0.02965881 = weight(_text_:22 in 773) [ClassicSimilarity], result of:
              0.02965881 = score(doc=773,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23214069 = fieldWeight in 773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=773)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Contents: Keynote address - Order and KO - Conceptology in KO - Mathematics in KO - Psychology and KO - Science and KO - Problems in KO - KOS general questions - KOS structure and elements, facet analysis - KOS construction - KOS maintenance, updating and storage - Compatibility, concordance, interoperability between indexing languages - Theory of classing and indexing - Taxonomies in communications engineering - Special KOSs in literature - Special KOSs in cultural sciences - General problems of natural language, derived indexing, tagging - Automatic language processing - Online retrieval systems and technologies - Problems of terminology - Subject-oriented terminology work - General problems of applied classing and indexing, catalogues, guidelines - Classing and indexing of non-book materials (images, archives, museums) - Persons and institutions in KO, cultural warrant - Organizing team - List of contributors
    Date
    22. 2.2013 12:09:34
  6. Content organization in the new millennium : papers contributed on content organization in the new millennium, Bangalore, 2-4- June 2000. (2000) 0.02
    0.019027313 = product of:
      0.047568284 = sum of:
        0.026872277 = product of:
          0.053744555 = sum of:
            0.053744555 = weight(_text_:problems in 641) [ClassicSimilarity], result of:
              0.053744555 = score(doc=641,freq=10.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.35689673 = fieldWeight in 641, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=641)
          0.5 = coord(1/2)
        0.020696009 = product of:
          0.041392017 = sum of:
            0.041392017 = weight(_text_:etc in 641) [ClassicSimilarity], result of:
              0.041392017 = score(doc=641,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.20945519 = fieldWeight in 641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=641)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Content Organization in the New Millennium is a compilation of papers contributed to the Seminar on 'Content Organization in the New Millennium' (2-4 June 2000). There were nine invited presentations on various aspects of content organization. The rapid developments in, and widening range of use of, the Internet worldwide are enabling easy access to information and data globally and almost seamlessly. The quantity, range and variety of information - as text, image, graphics, sound, and multimedia - that is accessible is indeed vast. On the other hand, the ease with which almost any data or information can be placed on and disseminated via the Internet is causing problems for information seekers. One of the causes of these problems is the amorphous nature of the information accessed, which has only minimal organization. This results in, among other things, retrieving too much information that is irrelevant to the subject of interest to the user; many a time, it is like searching for a needle in a haystack. Recently, information professionals and subject specialists have become concerned with the situation and have experimented with tools, techniques and strategies, and with the use of time-tested classificatory ideas and other knowledge organization tools, such as thesauri, to mitigate the problems. The paper on "Knowledge Management and Content Organization" by L.J. Haravu places the subject of content organization in the broader canvas of knowledge management (KM). Content organization and the tools necessary to aid knowledge discovery, a basic objective of most information-seeking activity, are discussed in the paper "Content Organization as an Aid to Knowledge Discovery" by A. Neelameghan. In that paper, the role of statistical, informetric and scientometric techniques is mentioned, and it is elaborated on by I.K. Ravichandra Rao in his paper "Quantitative Techniques for Content Analysis." HTML forms for web publishing and embedding metadata have been in wide use, but they are being extended or replaced by XML, XSL, etc. for customizing Document Type Definitions to enhance retrieval effectiveness (illustrated below). Shalini R. Urs and K.S. Raghavan discuss this aspect of content organization based on the experience of building a database of theses. The variety of factors to be taken into consideration in content organization for Internet-based information services is elaborated by T.B. Rajashekar on the basis of practical experiences at the Indian Institute of Science. S.B. Viswakumar identifies factors that may affect content organization in multimedia databases. Handling the scripts and vocabulary of Indian languages in organizing the contents of databases raises additional problems and issues, and these are being examined in increasing measure as more and more such databases are constructed in this country. B.A. Sharada considers some aspects of the problems of preparing databases in the Kannada language. The papers by M.A. Gopinath and G. Bhattacharyya deal, respectively, with the training required for, and professional aspects of, content organization. Content Organization in the New Millennium is the first publication in the Seminar Series of the Sarada Ranganathan Endowment for Library Science.
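    A minimal sketch of the XML-over-HTML point above: once records carry explicit structure, field-level indexing becomes dependable. The element names here are invented for illustration, not taken from the Urs/Raghavan theses database:

      import xml.etree.ElementTree as ET

      # Hypothetical thesis record; a DTD or schema would pin down this structure.
      record = ET.fromstring("""
      <thesis>
        <title>Content organization of multilingual databases</title>
        <language>Kannada</language>
        <keywords><kw>content organization</kw><kw>Indian languages</kw></keywords>
      </thesis>""")

      # Field-level access for an index entry, instead of scraping HTML layout.
      entry = {
          "title": record.findtext("title"),
          "language": record.findtext("language"),
          "keywords": [kw.text for kw in record.findall("keywords/kw")],
      }
      print(entry)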
  7. Smith, L.C.: "Wholly new forms of encyclopedias" : electronic knowledge in the form of hypertext (1989) 0.02
    0.018896578 = product of:
      0.047241446 = sum of:
        0.027468907 = product of:
          0.054937813 = sum of:
            0.054937813 = weight(_text_:problems in 3558) [ClassicSimilarity], result of:
              0.054937813 = score(doc=3558,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.36482072 = fieldWeight in 3558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3558)
          0.5 = coord(1/2)
        0.019772539 = product of:
          0.039545078 = sum of:
            0.039545078 = weight(_text_:22 in 3558) [ClassicSimilarity], result of:
              0.039545078 = score(doc=3558,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.30952093 = fieldWeight in 3558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3558)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The history of encyclopedias and wholly new forms of encyclopedias are briefly reviewed. The possibilities and problems that hypertext presents as a basis for new forms of encyclopedias are explored. The capabilities of current systems, both experimental and commercially available, are outlined, focusing on new possibilities for authoring and design and for reading and retrieval. Examples of applications already making use of hypertext are given.
    Date
    7. 1.1996 22:47:52
  8. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.01
    0.014172435 = product of:
      0.035431087 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 3564) [ClassicSimilarity], result of:
              0.04120336 = score(doc=3564,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
        0.014829405 = product of:
          0.02965881 = sum of:
            0.02965881 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.02965881 = score(doc=3564,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text databases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text databases with the text of the articles behind these 204 citations revealed that two reasons contributed to these failures: the searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
  9. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.013627884 = product of:
      0.03406971 = sum of:
        0.008584034 = product of:
          0.017168067 = sum of:
            0.017168067 = weight(_text_:problems in 636) [ClassicSimilarity], result of:
              0.017168067 = score(doc=636,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.114006475 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
        0.025485676 = product of:
          0.05097135 = sum of:
            0.05097135 = weight(_text_:exercises in 636) [ClassicSimilarity], result of:
              0.05097135 = score(doc=636,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.19644089 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important, and its success has been mainly supported by its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The long and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC's history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
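    The evaluation methodology the reviewers mention rests on scoring ranked runs against relevance judgments. A minimal sketch of one standard TREC measure, non-interpolated average precision (the document IDs below are invented):

      def average_precision(ranked_ids, relevant_ids):
          """Mean of precision@k over the ranks k where a relevant document appears."""
          relevant = set(relevant_ids)
          hits, precisions = 0, []
          for k, doc_id in enumerate(ranked_ids, start=1):
              if doc_id in relevant:
                  hits += 1
                  precisions.append(hits / k)
          return sum(precisions) / len(relevant) if relevant else 0.0

      # Relevant docs d1 and d4 retrieved at ranks 1 and 3 of a three-item run.
      print(average_precision(["d1", "d2", "d4"], ["d1", "d4"]))  # (1/1 + 2/3) / 2 = 0.833...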
  10. Panzer, M.: Dewey: how to make it work for you (2013) 0.01
    0.011810362 = product of:
      0.029525906 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 5797) [ClassicSimilarity], result of:
              0.034336135 = score(doc=5797,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 5797, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5797)
          0.5 = coord(1/2)
        0.0123578375 = product of:
          0.024715675 = sum of:
            0.024715675 = weight(_text_:22 in 5797) [ClassicSimilarity], result of:
              0.024715675 = score(doc=5797,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.19345059 = fieldWeight in 5797, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5797)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The article discusses various aspects of the Dewey Decimal Classification (DDC) system of classifying library books in 2013. Background is presented on some librarians' desire to stop using DDC and adopt a genre-based system of classification. It says librarians can use the DDC to deal with problems and issues related to library book classification. It highlights the benefits of using captions and relative index terms and semantic relationships in DDC.
    Source
    Knowledge quest. 42(2013) no.2, S.22-29
  11. Recruiting, educating, and training cataloging librarians : solving the problems (1989) 0.01
    0.010987563 = product of:
      0.054937813 = sum of:
        0.054937813 = product of:
          0.10987563 = sum of:
            0.10987563 = weight(_text_:problems in 1057) [ClassicSimilarity], result of:
              0.10987563 = score(doc=1057,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.72964144 = fieldWeight in 1057, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.125 = fieldNorm(doc=1057)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  12. ¬The reference library user : problems and solutions (1991) 0.01
    0.010987563 = product of:
      0.054937813 = sum of:
        0.054937813 = product of:
          0.10987563 = sum of:
            0.10987563 = weight(_text_:problems in 259) [ClassicSimilarity], result of:
              0.10987563 = score(doc=259,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.72964144 = fieldWeight in 259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.125 = fieldNorm(doc=259)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  13. Current theory in library and information science (2002) 0.01
    0.010574651 = product of:
      0.026436627 = sum of:
        0.009711726 = product of:
          0.019423451 = sum of:
            0.019423451 = weight(_text_:problems in 822) [ClassicSimilarity], result of:
              0.019423451 = score(doc=822,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.1289836 = fieldWeight in 822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
        0.016724901 = product of:
          0.033449803 = sum of:
            0.033449803 = weight(_text_:etc in 822) [ClassicSimilarity], result of:
              0.033449803 = score(doc=822,freq=4.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.16926536 = fieldWeight in 822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in JASIST 54(2003) no.4, S.358-359 (D.O. Case): "Having recently written a chapter on theories applied in information-seeking research (Case, 2002), I was eager to read this issue of Library Trends devoted to "Current Theory." Once in hand I found the individual articles in the issue to be of widely varying quality, and the scope to be disappointingly narrow. A more accurate title might be "Some Articles about Theory, with Even More on Bibliometrics." Eight of the thirteen articles (not counting the Editor's brief introduction) are about quantifying the growth, quality and/or authorship of literature (mostly in the sciences, with one example from the humanities). Social and psychological theories are hardly mentioned - even though one of the articles claims that nearly half of all theory invoked in LIS emanates from the social sciences. The editor, SUNY Professor Emeritus William E. McGrath, claims that the first six articles are about theory, while the rest are original research that applies theory to some problem - a characterization that I find odd. Reading his Introduction provides some clues to the curious composition of this issue. McGrath states that only in "physics and other exact sciences" are definitions of theory "well understood" (p. 309) - a view I think most psychologists and sociologists would contest - and restricts his own definition of theory to "an explanation for a quantifiable phenomenon" (p. 310). In his own chapter in the issue, "Explanation and Prediction," McGrath makes it clear that he holds out hope for a "unified theory of librarianship" that would resemble those regarding "fundamental forces in physics and astronomy." However, isn't it wishful thinking to hope for a physics-like theory to emerge from particular practices (e.g., citation) and settings (e.g., libraries) when broad generalizations do not easily accrue from observation of more basic human behaviors? Perhaps this is where the emphasis on documents, rather than people, entered into the choice of material for "Current Theory." Artifacts of human behavior, such as documents, are more amenable to prediction in ways that allow for the development of theory: witness Zipf's Principle of Least Effort, the Bradford Distribution, Lotka's Law, etc. I imagine that McGrath would say that "librarianship," at least, is more about materials than people. McGrath's own contribution to this issue emphasizes measures of libraries, books and journals. By citing exemplar studies, he makes it clear that much has been done to advance measurement of library operations, and he eloquently argues for an overarching view of the various library functions and their measures. But we have all heard similar arguments before; other disciplines, in earlier times, have made the argument that a solid foundation of empirical observation had been laid down, which would lead inevitably to a grand theory of "X." McGrath admits that "some may say the vision [of a unified theory] is naive" (p. 367), but concludes that "It remains for researchers to tie the various levels together more formally . . . in constructing a comprehensive unified theory of librarianship."
    There is only one article in the issue that claims to offer a theory of the scope discussed by McGrath, and I am sorry that it appears in this issue. Bor-Sheng Tsai's "Theory of Information Genetics" is an almost incomprehensible combination of four different "models" with names like "Möbius Twist" and "Clipping-Jointing." Tsai starts by posing the question "What is it that makes the 'UNIVERSAL' information generating, representation, and transfer happen?" From this ungrammatical beginning, things get rapidly worse. Tsai makes side trips into the history of defining information, offers three-dimensional plots of citation data, a formula for "bonding relationships," hypothetical data on food consumption, sample pages from a web-based "experts directory," and dozens of citations to works that are peripheral to the discussion. The various sections of the article seem to have little to do with one another. I can't believe that the University of Illinois would publish something so poorly edited. Now I will turn to the dominant, "bibliometric" articles in this issue, in order of their appearance: Judit Bar-Ilan and Bluma Peritz write about "Informetric Theories and Methods for Exploring the Internet." Theirs is a survey of research on patterns of electronic publication, including different ways of sampling, collecting and analyzing data on the Web. Their contribution to the "theory" theme lies in noting that some existing bibliometric laws apply to the Web. William Hood and Concepción Wilson's article, "Solving Problems ... Using Fuzzy Set Theory," demonstrates the widespread applicability of this mathematical tool to library-related problems, such as making decisions about the binding of documents or improving document retrieval. Ronald Rousseau's piece on "Journal Evaluation" discusses the strengths and weaknesses of various indicators for determining impact factors and rankings for journals. His is an exceptionally well-written article that has everything to do with measurement but almost nothing to do with theory, to my way of thinking. "The Matthew Effect for Countries" is the topic of Manfred Bonitz's paper on citations to scientific publications, analyzed by nation of origin. His research indicates that publications from certain countries - such as Switzerland, Denmark, the USA and the UK - receive more than the expected number of citations; correspondingly, some rather large countries like China receive far fewer than might be expected. Bonitz provides an extensive discussion of how the "MEC" measure came about and what it means, relating it to efficiency in scientific research. A bonus is his detour into the origins of the Matthew Effect in the Bible, and the subsequent popularization of the name by the sociologist Robert Merton. Wolfgang Glänzel's "Coauthorship patterns and trends in the sciences (1980-1998)" is, as the title implies, another citation analysis. He compares the number of authors on papers in three fields - biomedical research, chemistry and mathematics - at six-year intervals. Among other conclusions, Glänzel notes that the percentage of publications with four or more authors has been growing in all three fields, and that multiauthored papers are more likely to be cited.
    Coauthorship is also the topic of Hildrun Kretschmer's article on the origins and uses of "Gestalt Theory." The explanation of the theory is fascinating, but the application of it, involving three-dimensional graphics depicting coauthorship in physics and medicine, seems somewhat distant from Gestalt Theory, and the importance of the results is hard to appreciate. Henk Moed, Marc Luwel, and A.J. Nederhof apply bibliometrics to the evaluation of research performance in the humanities, specifically Flemish professors of law. Their attempts to classify and measure research output appear rather specific to the population they studied, with little contribution to a more general bibliometric theory. The final contribution is by Peter Vinkler. He offers a comprehensive model of the growth and institutionalization of scientific information. Since it could be viewed as an overview of the concerns of scientometrics, Vinkler's article might best be read before some of the others described above. To conclude, this issue of Library Trends has a schizophrenic quality about it. "Theory" is defined broadly in those initial articles "about" theory (especially in those by McKechnie and Pettigrew, and by Glazier and Grover), but most of the remaining pieces consider theory narrowly, in the context of bibliometric analysis. This is unfortunate on two counts. First, while bibliometric investigations have uncovered fascinating and useful statistical regularities in the growth, authorship and citation of literature, they are often short on the sort of explanation that we would expect from a well-developed theory. That is, why do the statistical distributions (of publications, citations, etc.) appear as they do? Second, information science studies people at least as much as it does documents. Appropriately, then, most of our theory comes from the social sciences (as the McKechnie and Pettigrew article convincingly demonstrates). However, this source of theory is virtually ignored in an issue of Library Trends on "current theory." What a shame."
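    For readers new to the regularities the review keeps returning to, one has a particularly simple closed form: Lotka's law predicts that the number of authors with n papers falls off roughly as 1/n^2 of the count of single-paper authors. A minimal sketch of that expectation (2 is Lotka's original exponent; fitted values vary by field):

      def lotka_expected(single_paper_authors, n, exponent=2.0):
          """Expected number of authors with exactly n papers under Lotka's law."""
          return single_paper_authors / n ** exponent

      for n in (1, 2, 3, 4):
          print(n, round(lotka_expected(100, n), 1))  # 100.0, 25.0, 11.1, 6.2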
  14. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.01
    0.010061655 = product of:
      0.050308276 = sum of:
        0.050308276 = sum of:
          0.03547887 = weight(_text_:etc in 1833) [ClassicSimilarity], result of:
            0.03547887 = score(doc=1833,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.17953302 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.014829405 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.014829405 = score(doc=1833,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.2 = coord(1/5)
    
    Abstract
    When, in the 1970s, the label "information manager" was increasingly promoted for people who had until then operated as documentalists, the established circles of archivists and librarians occasionally smiled at this and read it as the sign of an identity crisis, or at least of an unsettled professional profile. For the profession of media archivists/media documentalists, organized since 1960 in Section (Fachgruppe) 7 of the Verein deutscher Archivare (VdA, later Verband deutscher Archivare), this self-positioning in the face of new substantive challenges (the information flood) and new technologies (electronic data processing) was, however, an early and self-evident part of everyday professional life. "Stop, it won't work without us!" ran the headline of an article in the association journal "Info 7" that dealt with the building of ever more powerful networks and ever faster data highways. Information, information society: at that time these terms were understood almost exclusively in a technical sense. The informatized, not the informed, society stood in the foreground, which in turn brought critics onto the scene, from Joseph Weizenbaum in the USA to the information ecologists in Bremen. In the national, sometimes merely regional, projects and pilot schemes with data highways (including the early Btx) it never became quite clear which contents, in what form, were to be sent down these networks and roads, and who was actually supposed to select, portion, and position those contents - in short, to manage them. With the World Wide Web at the latest, these projects became obsolete, at least as far as hardware and software were concerned. What remained is the topic of contents (in today's parlance: content). And, ever more pressingly and in a sense that is no longer merely technical, the topic of information management. "Medien-Informationsmanagement" was the title of Section 7's spring 2000 conference in Weimar, and the follow-up conference in Cologne in 2001, which set a documentary pragmatism against multimedia production, likewise dealt with content as a business field and with content management systems. The lectures and discussion contributions from these two conferences, collected in this sixth volume of the series Beiträge zur Mediendokumentation, illuminate the title topic from the most varied angles: archival, documentary, commercial, professional, and legal. What becomes clear is that the job title Medienarchivar/Mediendokumentar stands fairly precisely for everything that happens today with so-called old and new media in an organizational sense, that is, in ordering and mediating them. This applies in particular to the Internet and the intranets born from it. Both need the ordering hand that was trained on the old media - book, newspaper, sound recording, film, etc. - for they live to a large extent on them. That the Internet is nevertheless a medium sui generis and confronts the old information professions with entirely new challenges: that, too, runs through the contributions from Weimar and Cologne.
    Date
    11. 5.2008 19:49:22
  15. Visual interfaces to digital libraries : [extended papers presented at the first and second International Workshops on Visual Interfaces to Digital Libraries, held at the Joint Conference on Digital Libraries (JCDL) in 2001 and 2002] (2002) 0.01
    0.010021425 = product of:
      0.02505356 = sum of:
        0.0145675875 = product of:
          0.029135175 = sum of:
            0.029135175 = weight(_text_:problems in 1784) [ClassicSimilarity], result of:
              0.029135175 = score(doc=1784,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.1934754 = fieldWeight in 1784, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1784)
          0.5 = coord(1/2)
        0.010485973 = product of:
          0.020971946 = sum of:
            0.020971946 = weight(_text_:22 in 1784) [ClassicSimilarity], result of:
              0.020971946 = score(doc=1784,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.16414827 = fieldWeight in 1784, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1784)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Visual Interfaces to Digital Libraries exploit the power of human vision and spatial cognition to help individuals mentally organize, and electronically access and manage, large and complex information spaces. They draw on progress in the field of information visualization and seek to shift the users' mental load from slow reading to faster perceptual processes such as visual pattern recognition. Based on two workshops, the book presents an introductory overview as well as a closing list of the top ten problems in the area by the volume editors. Also included are 16 thoroughly reviewed and revised full papers organized in topical sections on visual interfaces to documents, document parts, document variants, and document usage data; visual interfaces to image and video documents; visualization of knowledge domains; cartographic interfaces to digital libraries; and a general framework.
    Content
    Contains the contributions: Katy Börner and Chaomei Chen: Visual Interfaces to Digital Libraries: Motivation, Utilization, and Socio-technical Challenges - Part I. Visual interfaces to Documents, Document Parts, Document Variants, and Document Usage Data - George Buchanan, Ann Blandford, Matt Jones, and Harold Thimbleby: Spatial Hypertext as a Reader Tool in Digital Libraries; Michael Christoffel and Bethina Schmitt: Accessing Libraries as Easy as a Game; Carlos Monroy, Rajiv Kochumman, Richard Furuta, and Eduardo Urbina: Interactive Timeline Viewer (ItLv): A Tool to Visualize Variants Among Documents; Mischa Weiss-Lijn, Janet T. McDonnell, and Leslie James: An Empirical Evaluation of the Interactive Visualization of Metadata to Support Document Use; Stephen G. Eick: Visual Analysis of Website Browsing Patterns - Part II. Visual Interfaces to Image and Video Documents - Adrian Graham, Hector Garcia-Molina, Andreas Paepcke, and Terry Winograd: Extreme Temporal Photo Browsing; Michael G. Christel: Accessing News Video Libraries through Dynamic Information Extraction, Summarization, and Visualization; Anselm Spoerri: Handwritten Notes as a Visual Interface to Index, Edit and Publish Audio/Video Highlights - Part III. Visualization of Knowledge Domains - Jan W. Buzydlowski, Howard D. White, and Xia Lin: Term Co-occurrence Analysis as an Interface for Digital Libraries; Kevin W. Boyack, Brian N. Wylie, and George S. Davidson: Information Visualization, Human-Computer Interaction, and Cognitive Psychology: Domain Visualizations - Part IV. Cartographic Interfaces to Digital Libraries - André Skupin: On Geometry and Transformation in Map-Like Information Visualization; Guoray Cai: GeoVIBE: A Visual Interface for Geographic Digital Libraries: Teong Joo Ong, John J. Leggett, Hugh D. Wilson, Stephan L. Hatch, and Monique D. Reed: Interactive Information Visualization in the Digital Flora of Texas; Dan Ancona, Mike Freeston, Terry Smith, and Sara Fabrikant: Visual Explorations for the Alexandria Digital Earth Prototype - Part V. Towards a General Framework - Rao Shen, Jun Wang, and Edward A. Fox: A Lightweight Protocol between Digital Libraries and Visualization Systems; Chaomei Chen and Katy Börner: Top Ten Problems in Visual Interfaces to Digital Libraries
    Date
    22. 2.2003 17:25:39
    22. 3.2008 15:02:37
  16. Serial cataloguing : modern perspectives and international developments (1992) 0.01
    0.00988627 = product of:
      0.04943135 = sum of:
        0.04943135 = product of:
          0.0988627 = sum of:
            0.0988627 = weight(_text_:22 in 3704) [ClassicSimilarity], result of:
              0.0988627 = score(doc=3704,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.77380234 = fieldWeight in 3704, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.15625 = fieldNorm(doc=3704)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Serials librarian. 22(1992), nos.3/4
  17. Advances in librarianship (1998) 0.01
    0.0097869085 = product of:
      0.04893454 = sum of:
        0.04893454 = product of:
          0.09786908 = sum of:
            0.09786908 = weight(_text_:22 in 4698) [ClassicSimilarity], result of:
              0.09786908 = score(doc=4698,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.76602525 = fieldWeight in 4698, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4698)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Issue
    Vol.22.
    Signature
    78 BAHH 1089-22
  18. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.01
    0.009700513 = product of:
      0.024251282 = sum of:
        0.012017647 = product of:
          0.024035294 = sum of:
            0.024035294 = weight(_text_:problems in 1155) [ClassicSimilarity], result of:
              0.024035294 = score(doc=1155,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.15960906 = fieldWeight in 1155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1155)
          0.5 = coord(1/2)
        0.012233635 = product of:
          0.02446727 = sum of:
            0.02446727 = weight(_text_:22 in 1155) [ClassicSimilarity], result of:
              0.02446727 = score(doc=1155,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.19150631 = fieldWeight in 1155, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1155)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The MTSR 2013 program and the contents of these proceedings show a rich diversity of research and practices, drawing on problems from metadata and semantically focused tools and technologies, linked data, cross-language semantics, ontologies, metadata models, and semantic system and metadata standards. The general session of the conference included 18 papers covering a broad spectrum of topics, demonstrating the interdisciplinary nature of the metadata field, and was divided into three main themes: platforms for research data sets, system architecture and data management; metadata and ontology validation, evaluation, mapping and interoperability; and content management. Metadata as a research topic is maturing, and the conference also supported the following five tracks: Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures; Metadata and Semantics for Cultural Collections and Applications; Metadata and Semantics for Agriculture, Food and Environment; Big Data and Digital Libraries in Health, Science and Technology; and European and National Projects, and Project Networking. Each track had a rich selection of papers, giving broader diversity to MTSR and enabling deeper exploration of significant topics.
    Date
    17.12.2013 12:51:22
  19. XML data management : native XML and XML-enabled database systems (2003) 0.01
    0.009488272 = product of:
      0.023720678 = sum of:
        0.011894386 = product of:
          0.023788773 = sum of:
            0.023788773 = weight(_text_:problems in 2073) [ClassicSimilarity], result of:
              0.023788773 = score(doc=2073,freq=6.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.15797201 = fieldWeight in 2073, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
        0.011826291 = product of:
          0.023652581 = sum of:
            0.023652581 = weight(_text_:etc in 2073) [ClassicSimilarity], result of:
              0.023652581 = score(doc=2073,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.11968868 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Relational database management systems have been one of the great success stories of recent times and, sensitive to the market, most major vendors have responded by extending their products to handle XML data while still exploiting the range of facilities that a modern RDBMS affords. No book of this type would be complete without consideration of the "big three" (Oracle 9i, DB2, and SQL Server 2000, which each get a dedicated chapter) and, though occasionally overtly piecemeal and descriptive, the authors all note the shortcomings as well as the strengths of the respective systems. This part of the book is somewhat dichotomous, these chapters being followed by two that propose detailed solutions to somewhat theoretical problems: a generic architecture for storing XML in an RDBMS, and an object-relational approach to building an XML repository. The biography of the author of the latter (Paul Brown) contains the curious but strangely reassuring admission that "he remains puzzled by XML." The first five components are in-depth case studies of XML database applications. Necessarily diverse, few readers will be interested in all the topics presented, but I was particularly interested in the first case study, on bioinformatics. One of the twentieth century's greatest scientific undertakings was the Human Genome Project, the quest to list the information encoded by the sequence of DNA that makes up our genes, which has been referred to as "a paradigm for information management in the life sciences" (Pearson & Soll, 1991). After a brief introduction to molecular biology to give the background to the information management problems, the authors turn to the use of XML in bioinformatics. Some of the data are hierarchical (e.g., the Linnaean classification of a human as a primate, primates as mammals, mammals as vertebrates, etc.) but others are far more difficult to model. The Human Genome Project is virtually complete as far as the data acquisition phase is concerned, and the immense volume of genome sequence data is no longer a very significant information management issue per se. However, bioinformaticians now need to interpret this information. Some data are relatively straightforward, e.g., the positioning of genes and sequence elements (e.g., promoters) within the sequences, but there is often little or no knowledge available on the direct and indirect interactions between them. There are vast numbers of such interrelationships; many complex data types and novel ones are constantly emerging, necessitating an extensible approach and the ability to manage semi-structured data. In the past, object databases such as AceDB (Durbin & Mieg, 1991) went some way toward meeting these aims, but it is the combination of XML and databases that more completely addresses the knowledge management requirements of bioinformatics. XML is being enthusiastically adopted, with a plethora of XML markup standards being developed; as authors Direen and Jones note, "The unprecedented degree and flexibility of XML in terms of its ability to capture information is what makes it ideal for knowledge management and for use in bioinformatics."
    After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search (see the sketch after this entry). Again, examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but two that are more research-oriented and general follow them. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications, an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used. In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMS such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for storing XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic; for example, some tags used in the examples are indexed (perhaps a separate examples index would be better), and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron," but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
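    The embedded-search idea above, selecting sequence elements with XPath and handing each hit to BLAST, can be illustrated generically. The element names and the run_blast stub below are invented stand-ins, not the NeoCore XMS API:

      import xml.etree.ElementTree as ET

      doc = ET.fromstring("""
      <genome>
        <gene id="g1"><seq>ATGGCGTTA</seq></gene>
        <gene id="g2"><seq>ATGCCCGGG</seq></gene>
      </genome>""")

      def run_blast(sequence):
          # Hypothetical stand-in for submitting one sequence to a BLAST search.
          return f"blast({sequence})"

      # ElementTree supports a small XPath subset; './/gene/seq' selects every target.
      for seq in doc.findall(".//gene/seq"):
          print(run_blast(seq.text))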
  20. Computing with words in information / intelligent systems 2 : Applications (1999) 0.01
    0.009461033 = product of:
      0.047305163 = sum of:
        0.047305163 = product of:
          0.094610326 = sum of:
            0.094610326 = weight(_text_:etc in 3927) [ClassicSimilarity], result of:
              0.094610326 = score(doc=3927,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.47875473 = fieldWeight in 3927, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3927)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In part 2, applications in a wider array of fields are presented which use the paradigm of computing with words, exemplified by reasoning, data analysis, data mining, machine learning, risk analyses, reliability and quality control, decision making, optimization and control, databases, medical diagnosis, business analyses, traffic management, power system planning, military applications, etc.

Languages

  • e 175
  • d 50
  • m 8
  • i 1

Types

  • m 117
  • el 5
  • r 1
