Search (53 results, page 1 of 3)

  • type_ss:"s"
  • year_i:[2000 TO 2010}
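
  The two filters above are Solr filter queries. Note the mixed-bracket range syntax in year_i:[2000 TO 2010}: the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive, i.e. 2000 <= year < 2010. As a minimal sketch, such a filtered search could be issued with pysolr roughly as follows; the core name "litera" is hypothetical, and the query terms are only inferred from the score explanations below:

    import pysolr

    # Assumptions: a local Solr instance with a core named "litera" (hypothetical);
    # the query terms "e.g" and "22" are inferred from the score explanations below.
    solr = pysolr.Solr("http://localhost:8983/solr/litera", timeout=10)

    results = solr.search(
        "e.g 22",
        fq=['type_ss:"s"',             # facet filter: record type = s
            "year_i:[2000 TO 2010}"],  # 2000 <= year < 2010 (mixed brackets)
        rows=20,                       # 20 hits per page (page 1 of 3)
        debugQuery="true",             # emit per-document score explanations
    )

    print(results.hits)                # total number of matches (53 here)
    for doc in results:
        print(doc.get("id"))
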
  1. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.04
    0.04234663 = product of:
      0.08469326 = sum of:
        0.08469326 = sum of:
          0.058385678 = weight(_text_:e.g in 150) [ClassicSimilarity], result of:
            0.058385678 = score(doc=150,freq=6.0), product of:
              0.23393378 = queryWeight, product of:
                5.2168427 = idf(docFreq=651, maxDocs=44218)
                0.044842023 = queryNorm
              0.24958208 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2168427 = idf(docFreq=651, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.026307581 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.026307581 = score(doc=150,freq=6.0), product of:
              0.15702912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044842023 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.5 = coord(1/2)
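
    The indented tree above is Lucene's debug explanation of a ClassicSimilarity (TF-IDF) score: for each matching term, fieldWeight = tf(freq) x idf x fieldNorm with tf = sqrt(freq), queryWeight = idf x queryNorm, the term's contribution is queryWeight x fieldWeight, and the contributions are summed and scaled by the coordination factor (here coord(1/2) = 0.5). A small Python sketch, using only the numbers printed above, reproduces the 0.04234663 shown for result 1:

      import math

      def term_score(freq, idf, field_norm, query_norm):
          """ClassicSimilarity contribution of one term: queryWeight * fieldWeight."""
          tf = math.sqrt(freq)                  # tf(freq=6.0) = 2.4494898
          query_weight = idf * query_norm       # 5.2168427 * 0.044842023 = 0.23393378
          field_weight = tf * idf * field_norm  # 2.4494898 * 5.2168427 * 0.01953125 = 0.24958208
          return query_weight * field_weight

      QUERY_NORM = 0.044842023                  # the same queryNorm for every clause
      # Values copied verbatim from the explanation for doc=150:
      s_eg = term_score(6.0, 5.2168427, 0.01953125, QUERY_NORM)  # _text_:e.g -> 0.058385678
      s_22 = term_score(6.0, 3.5018296, 0.01953125, QUERY_NORM)  # _text_:22  -> 0.026307581
      score = 0.5 * (s_eg + s_22)               # coord(1/2) applied to the summed clauses
      print(f"{score:.8f}")                     # 0.04234663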
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems; this summary connects the concept of the semantic web with multimedia content analysis. A definition of fuzzy knowledge representation that can be used in multimedia content applications is provided, together with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications, and presents examples of methods applicable to multimedia content analysis. Multimedia content analysis is a very diverse field that touches many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors) up to the very high, semantic level (e.g., objects, events, tracks) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are organized very logically. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should have written an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, surveying the state of the art of the field and thus introducing it to the reader.
  2. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 2840) [ClassicSimilarity], result of:
              0.0972076 = score(doc=2840,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 2840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library hi tech. 22(2004) no.1
  3. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 7196) [ClassicSimilarity], result of:
              0.08505665 = score(doc=7196,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 7196, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7196)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library hi tech. 22(2004) no.1
  4. International yearbook of library and information management : 2001/2002 information services in an electronic environment (2001) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 1381) [ClassicSimilarity], result of:
              0.08505665 = score(doc=1381,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 1381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1381)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    25. 3.2003 13:22:23
  5. Subject gateways (2000) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 6483) [ClassicSimilarity], result of:
              0.08505665 = score(doc=6483,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 6483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:43:01
  6. MARC and metadata : METS, MODS, and MARCXML: current and future implications part 2 (2004) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 2841) [ClassicSimilarity], result of:
              0.08505665 = score(doc=2841,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 2841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2841)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library hi tech. 22(2004) no.2
  7. Wege zum Wissen - die menschengerechte Information : 22. Oberhofer Kolloquium 2002, Gotha, 26. bis 28. September 2002. Proceedings (2002) 0.02
    0.018226424 = product of:
      0.03645285 = sum of:
        0.03645285 = product of:
          0.0729057 = sum of:
            0.0729057 = weight(_text_:22 in 7916) [ClassicSimilarity], result of:
              0.0729057 = score(doc=7916,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.46428138 = fieldWeight in 7916, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7916)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Wissen in Aktion : Wege des Knowledge Managements, 22. Online-Tagung der DGI 2000 / Frankfurt am Main, 2. bis 4. Mai 2000: Proceedings (2000) 0.02
    0.018226424 = product of:
      0.03645285 = sum of:
        0.03645285 = product of:
          0.0729057 = sum of:
            0.0729057 = weight(_text_:22 in 1025) [ClassicSimilarity], result of:
              0.0729057 = score(doc=1025,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.46428138 = fieldWeight in 1025, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1025)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Between data science and applied data analysis : Proceedings of the 26th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Mannheim, July 22-24, 2002 (2003) 0.02
    0.018226424 = product of:
      0.03645285 = sum of:
        0.03645285 = product of:
          0.0729057 = sum of:
            0.0729057 = weight(_text_:22 in 4606) [ClassicSimilarity], result of:
              0.0729057 = score(doc=4606,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.46428138 = fieldWeight in 4606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4606)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Creating Web-accessible databases : case studies for libraries, museums, and other nonprofits (2001) 0.02
    0.0151886875 = product of:
      0.030377375 = sum of:
        0.030377375 = product of:
          0.06075475 = sum of:
            0.06075475 = weight(_text_:22 in 4806) [ClassicSimilarity], result of:
              0.06075475 = score(doc=4806,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.38690117 = fieldWeight in 4806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4806)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 12:21:28
  11. Seminario FRBR : Functional Requirements for Bibliographic Records: requisiti funzionali per record bibliografici, Florence, 27-28 January 2000, Proceedings (2000) 0.02
    0.0151886875 = product of:
      0.030377375 = sum of:
        0.030377375 = product of:
          0.06075475 = sum of:
            0.06075475 = weight(_text_:22 in 3948) [ClassicSimilarity], result of:
              0.06075475 = score(doc=3948,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.38690117 = fieldWeight in 3948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3948)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    29. 8.2005 12:54:22
  12. XML data management : native XML and XML-enabled database systems (2003) 0.01
    0.013483594 = product of:
      0.026967188 = sum of:
        0.026967188 = product of:
          0.053934377 = sum of:
            0.053934377 = weight(_text_:e.g in 2073) [ClassicSimilarity], result of:
              0.053934377 = score(doc=2073,freq=8.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.23055404 = fieldWeight in 2073, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Relational database management systems have been one of the great success stories of recent times and, sensitive to the market, most major vendors have responded by extending their products to handle XML data while still exploiting the range of facilities that a modern RDBMS affords. No book of this type would be complete without consideration of the "big three" (Oracle 9i, DB2, and SQL Server 2000, which each get a dedicated chapter), and though occasionally overly piecemeal and descriptive, the authors all note the shortcomings as well as the strengths of the respective systems. This part of the book is somewhat dichotomous, these chapters being followed by two that propose detailed solutions to somewhat theoretical problems: a generic architecture for storing XML in an RDBMS and an object-relational approach to building an XML repository. The biography of the author of the latter (Paul Brown) contains the curious but strangely reassuring admission that "he remains puzzled by XML." The next five contributions are in-depth case studies of XML database applications. Necessarily diverse, few will be interested in all the topics presented, but I was particularly interested in the first case study, on bioinformatics. One of the twentieth century's greatest scientific undertakings was the Human Genome Project, the quest to list the information encoded by the sequence of DNA that makes up our genes, which has been referred to as "a paradigm for information management in the life sciences" (Pearson & Soll, 1991). After a brief introduction to molecular biology to give the background to the information management problems, the authors turn to the use of XML in bioinformatics. Some of the data are hierarchical (e.g., the Linnaean classification of a human as a primate, primates as mammals, mammals as vertebrates, etc.) but others are far more difficult to model. The Human Genome Project is virtually complete as far as the data acquisition phase is concerned, and the immense volume of genome sequence data is no longer a very significant information management issue per se. However, bioinformaticians now need to interpret this information. Some data are relatively straightforward, e.g., the positioning of genes and sequence elements (e.g., promoters) within the sequences, but there is often little or no knowledge available on the direct and indirect interactions between them. There are vast numbers of such interrelationships; many complex data types and novel ones are constantly emerging, necessitating an extensible approach and the ability to manage semi-structured data. In the past, object databases such as AceDB (Durbin & Mieg, 1991) have gone some way to meeting these aims, but it is the combination of XML and databases that more completely addresses the knowledge management requirements of bioinformatics. XML is being enthusiastically adopted, with a plethora of XML markup standards being developed; as authors Direen and Jones note, "The unprecedented degree of flexibility of XML in terms of its ability to capture information is what makes it ideal for knowledge management and for use in bioinformatics."
    After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but they are followed by two that are more research-oriented and general. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used. In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMS such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for storing XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic; for example, some tags used in the examples are indexed (perhaps a separate examples index would be better), and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron," but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
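
    The reviewer's point that any XPath-selected set of sequences can become the target of an embedded similarity search can be made concrete with a small sketch; the XML structure, element names, and the naive scoring function below are invented for illustration and are not taken from NeoCore XMS or the book:

      import xml.etree.ElementTree as ET

      # Hypothetical document: three gene records with raw sequences.
      DOC = """
      <genome>
        <gene id="g1" organism="H.sapiens"><seq>ATGGCGTTA</seq></gene>
        <gene id="g2" organism="M.musculus"><seq>ATGGCGTTC</seq></gene>
        <gene id="g3" organism="H.sapiens"><seq>TTACCGGAT</seq></gene>
      </genome>
      """

      root = ET.fromstring(DOC)

      # XPath (ElementTree's limited dialect): sequences of all human genes.
      targets = [g.find("seq").text
                 for g in root.findall(".//gene[@organism='H.sapiens']")]

      def naive_similarity(a: str, b: str) -> float:
          """Stand-in for a BLAST-style scored comparison: fraction of matching positions."""
          n = min(len(a), len(b))
          return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

      query = "ATGGCGTTG"
      for seq in targets:
          print(seq, round(naive_similarity(query, seq), 2))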
  13. Current theory in library and information science (2002) 0.01
    0.013483594 = product of:
      0.026967188 = sum of:
        0.026967188 = product of:
          0.053934377 = sum of:
            0.053934377 = weight(_text_:e.g in 822) [ClassicSimilarity], result of:
              0.053934377 = score(doc=822,freq=8.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.23055404 = fieldWeight in 822, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in JASIST 54(2003) no.4, S.358-359 (D.O. Case): "Having recently written a chapter on theories applied in information-seeking research (Case, 2002), I was eager to read this issue of Library Trends devoted to "Current Theory." Once in hand I found the individual articles in the issue to be of widely varying quality, and the scope to be disappointingly narrow. A more accurate title might be "Some Articles about Theory, with Even More on Bibliometrics." Eight of the thirteen articles (not counting the Editor's brief introduction) are about quantifying the growth, quality and/or authorship of literature (mostly in the sciences, with one example from the humanities). Social and psychological theories are hardly mentioned, even though one of the articles claims that nearly half of all theory invoked in LIS emanates from the social sciences. The editor, SUNY Professor Emeritus William E. McGrath, claims that the first six articles are about theory, while the rest are original research that applies theory to some problem, a characterization that I find odd. Reading his Introduction provides some clues to the curious composition of this issue. McGrath states that only in "physics and other exact sciences" are definitions of theory "well understood" (p. 309), a view I think most psychologists and sociologists would contest, and restricts his own definition of theory to "an explanation for a quantifiable phenomenon" (p. 310). In his own chapter in the issue, "Explanation and Prediction," McGrath makes it clear that he holds out hope for a "unified theory of librarianship" that would resemble those regarding "fundamental forces in physics and astronomy." However, isn't it wishful thinking to hope for a physics-like theory to emerge from particular practices (e.g., citation) and settings (e.g., libraries) when broad generalizations do not easily accrue from observation of more basic human behaviors? Perhaps this is where the emphasis on documents, rather than people, entered into the choice of material for "Current Theory." Artifacts of human behavior, such as documents, are more amenable to prediction in ways that allow for the development of theory: witness Zipf's Principle of Least Effort, the Bradford Distribution, Lotka's Law, etc. I imagine that McGrath would say that "librarianship," at least, is more about materials than people. McGrath's own contribution to this issue emphasizes measures of libraries, books and journals. By citing exemplar studies, he makes it clear that much has been done to advance measurement of library operations, and he eloquently argues for an overarching view of the various library functions and their measures. But we have all heard similar arguments before; other disciplines, in earlier times, have made the argument that a solid foundation of empirical observation had been laid down, which would lead inevitably to a grand theory of "X." McGrath admits that "some may say the vision [of a unified theory] is naive" (p. 367), but concludes that "It remains for researchers to tie the various levels together more formally . . . in constructing a comprehensive unified theory of librarianship."
    However, for well over a century, major libraries in developed nations have been engaging in sophisticated measurement of their operations, and thoughtful scholars have been involved along the way; if no "unified theory" has emerged thus far, why would it happen in the near future? What if "libraries" are a historically determined conglomeration of distinct functions, some of which are much less important than others? It is telling that McGrath cites as many studies on brittle paper as he does investigations of reference services among his constellation of measurable services, even while acknowledging that the latter (as an aspect of "circulation") is more "essential." If one were to include in a unified theory similar phenomena outside of libraries, e.g., what happens in bookstores and WWW searches, it can be seen how difficult a coordinated explanation might become. Ultimately the value of McGrath's chapter is not in convincing the reader that a unified theory might emerge, but rather in highlighting the best in recent studies that examine library operations, identifying robust conclusions, and arguing for the necessity of clarifying and coordinating common variables and units of analysis. McGrath's article is one that would be useful for a general course in LIS methodology, and certainly for more specific lectures on the evaluation of libraries. I'm going to focus most of my comments on the remaining articles about theory, rather than the others that offer empirical results about the growth or quality of literature. I'll describe the latter only briefly. The best way to approach this issue is by first reading McKechnie and Pettigrew's thorough survey of the "Use of Theory in LIS research." Earlier results of their extensive content analysis of 1,160 LIS articles have been published in other journals before, but it is especially pertinent here. These authors find that only a third of LIS literature makes overt reference to theory, and that both usage and type of theory are correlated with the specific domain of the research (e.g., historical treatments versus user studies versus information retrieval). Lynne McKechnie and Karen Pettigrew identify four general sources of theory: LIS, the Humanities, Social Sciences and Sciences. This approach makes it obvious that the predominant source of theory is the social sciences (45%), followed by LIS (30%), the sciences (19%) and the humanities (5%), despite a predominance (almost 60%) of articles with science-related content. The authors discuss interdisciplinarity at some length, noting the great many non-LIS authors and theories which appear in the LIS literature, and the tendency for native LIS theories to go uncited outside of the discipline. Two other articles emphasize the ways in which theory has evolved. The more general of the two is Jack Glazier and Robert Grover's update of their classic 1986 Taxonomy of Theory in LIS. This article describes an elaborated version, called the "Circuits of Theory," offering definitions of a hierarchy of terms ranging from "world view" through "paradigm," "grand theory" and (ultimately) "symbols." Glazier & Grover's one-paragraph example of how theory was applied in their study of city managers is much too brief and is at odds with the emphasis on quantitative indicators of literature found in the rest of the volume.
    The second article about the evolution of theory, Richard Smiraglia's "The progress of theory in knowledge organization," restricts itself to the history of thinking about cataloging and indexing. Smiraglia traces the development of theory from a pragmatic concern with "what works," to a reliance on empirical tests, to an emerging flirtation with historicist approaches to knowledge."
  14. The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.01
    0.01215095 = product of:
      0.0243019 = sum of:
        0.0243019 = product of:
          0.0486038 = sum of:
            0.0486038 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.0486038 = score(doc=4049,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was the discussion of retrieval and related information-seeking tasks on large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks are: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  15. Wissen - Innovation - Netzwerke : Wege zur Zukunftsfähigkeit (2003) 0.01
    0.010632081 = product of:
      0.021264162 = sum of:
        0.021264162 = product of:
          0.042528324 = sum of:
            0.042528324 = weight(_text_:22 in 1391) [ClassicSimilarity], result of:
              0.042528324 = score(doc=1391,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.2708308 = fieldWeight in 1391, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1391)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 14:48:44
  16. Information Macht Bildung. : Zweiter Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e. V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e. V. (DGI), Leipzig, 23. bis 26. März 2004, zugleich 93. Deutscher Bibliothekartag (2004) 0.01
    0.010632081 = product of:
      0.021264162 = sum of:
        0.021264162 = product of:
          0.042528324 = sum of:
            0.042528324 = weight(_text_:22 in 3018) [ClassicSimilarity], result of:
              0.042528324 = score(doc=3018,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.2708308 = fieldWeight in 3018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3018)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2008 14:21:53
  17. Handbuch der Künstlichen Intelligenz (2003) 0.01
    0.010632081 = product of:
      0.021264162 = sum of:
        0.021264162 = product of:
          0.042528324 = sum of:
            0.042528324 = weight(_text_:22 in 2916) [ClassicSimilarity], result of:
              0.042528324 = score(doc=2916,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.2708308 = fieldWeight in 2916, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2916)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    21. 3.2008 19:10:22
  18. Dokumente und Datenbanken in elektronischen Netzen : Tagungsberichte vom 6. und 7. Österreichischen Online-Informationstreffen bzw. vom 7. und 8. Österreichischen Dokumentartag, Schloß Seggau, Seggauberg bei Leibnitz, 26.-29. September 1995, Congresszentrum Igls bei Innsbruck, 21.-24. Oktober 1997 (2000) 0.01
    0.009113212 = product of:
      0.018226424 = sum of:
        0.018226424 = product of:
          0.03645285 = sum of:
            0.03645285 = weight(_text_:22 in 4911) [ClassicSimilarity], result of:
              0.03645285 = score(doc=4911,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.23214069 = fieldWeight in 4911, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4911)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2000 16:34:40
  19. Software for Indexing (2003) 0.01
    0.008427247 = product of:
      0.016854495 = sum of:
        0.016854495 = product of:
          0.03370899 = sum of:
            0.03370899 = weight(_text_:e.g in 2294) [ClassicSimilarity], result of:
              0.03370899 = score(doc=2294,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.14409629 = fieldWeight in 2294, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2294)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    A chapter on image indexing starts with a useful discussion of the elements of bibliographic description needed for visual materials and of the variations in the functioning and naming of functions in different software packages. Sample features are discussed in light of four different software systems: MAVIS, Convera Screening Room, CONTENTdm, and Virage speech and pattern recognition programs. The chapter concludes with an overview of what one has to consider when choosing a system. The last chapter in this section is an oddball one on creating a back-of-the-book index using Microsoft Excel. The author warns: "It is not pretty, and it is not recommended" (p.209). A curiosity, but it should have been included as a counterpoint in the first part, not as part of the database indexing section. The final section begins with an excellent article on voice recognition software (Dragon NaturallySpeaking Preferred), followed by a look at "automatic indexing" through a critique of Sonar Bookends Automatic Indexing Generator. The final two chapters deal with Data Harmony's Machine Aided Indexer; one of them refers specifically to a news content indexing system. In terms of scope, this reviewer would have liked to see thesaurus management software included, since thesaurus management and the integration of thesauri with database indexing software are common and time-consuming concerns. There are also a few editorial glitches, such as the placement of the oddball article and inconsistent uses of fonts and caps (e.g., VIRAGE and Virage), but achieving consistency with this many authors is, indeed, a difficult task. More serious is the fact that the index is inconsistent. It reads as if authors submitted their own keywords which were then harmonized, so that the level of indexing varies by chapter. For example, there is an entry for "controlled vocabulary" (p.265) (singular) with one locator and no cross-references. There is an entry for "thesaurus software" (p.274) with two locators, plus a separate one for "Thesaurus Master" (p.274) with three locators. There are also references to thesauri/controlled vocabularies/taxonomies that are not mentioned in the index (e.g., the section Thesaurus management on p.204). This is sad. All too often indexing texts have poor indexes, I suppose because we are as prone to having to work under time pressures as the rest of the authors and editors in the world. But a good index that meets basic criteria should be a highlight in any book related to indexing. Overall this is a useful, if uneven, collection of articles written over the past few years. Because of the great variation between articles both in subject and in approach, there is something for everyone. The collection will be interesting to anyone who wants to be aware of how indexing software works and what it can do. I also definitely recommend it for information science teaching collections, since the explanations of the software carry implicit in them descriptions of how the indexing process itself is approached. However, the book's utility as a guide to purchasing choices is limited because of the unevenness; the vendor-written articles and testimonials are interesting and can certainly be helpful, but there are not nearly enough objective reviews. This is not a straight listing and comparison of software packages, but it deserves wide circulation since it presents an overall picture of the state of indexing software used by freelancers."
  20. Wege des Knowledge Managements : Wissen in Aktion (2000) 0.01
    0.0075943437 = product of:
      0.0151886875 = sum of:
        0.0151886875 = product of:
          0.030377375 = sum of:
            0.030377375 = weight(_text_:22 in 994) [ClassicSimilarity], result of:
              0.030377375 = score(doc=994,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.19345059 = fieldWeight in 994, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=994)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Issue
    22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings.

Languages

  • e 34
  • d 18
  • m 2
  • es 1