Search (17 results, page 1 of 1)

  • Active filter: classification_ss:"06.74 / Informationssysteme"
  1. Information visualization in data mining and knowledge discovery (2002) 0.05
    0.047730967 = product of:
      0.07159645 = sum of:
        0.03205038 = weight(_text_:reference in 1789) [ClassicSimilarity], result of:
          0.03205038 = score(doc=1789,freq=6.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.15570983 = fieldWeight in 1789, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.039546072 = sum of:
          0.025836568 = weight(_text_:database in 1789) [ClassicSimilarity], result of:
            0.025836568 = score(doc=1789,freq=4.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.12632638 = fieldWeight in 1789, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.013709505 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.013709505 = score(doc=1789,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.6666667 = coord(2/3)
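    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) scoring: for each matching query term, queryWeight = idf x queryNorm and fieldWeight = tf(freq) x idf x fieldNorm, the term score is the product of the two, the term scores are summed, and the sum is scaled by coord(matching clauses / total clauses). The short Python sketch below recomputes the 0.047730967 shown for result 1 from the statistics in the tree; it is a minimal illustration, not the system's own code. queryNorm and fieldNorm are copied from the output rather than derived from an index, the helper names are invented for the example, and the last digits can differ slightly because Lucene works in single precision.

      import math

      # ClassicSimilarity building blocks (Lucene's TF-IDF variant)
      def tf(freq):
          return math.sqrt(freq)                              # term-frequency factor, e.g. sqrt(6) = 2.4494898

      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1.0))  # e.g. idf(2055, 44218) = 4.0683694

      query_norm = 0.050593734   # copied from the explain output (depends on the whole query)
      field_norm = 0.015625      # length norm stored for this document's field at index time

      def term_score(freq, doc_freq, max_docs):
          i = idf(doc_freq, max_docs)
          query_weight = i * query_norm               # "queryWeight" line in the tree
          field_weight = tf(freq) * i * field_norm    # "fieldWeight" line in the tree
          return query_weight * field_weight

      # the three matching terms of result 1 (doc 1789)
      w_reference = term_score(6.0, 2055, 44218)      # ~0.03205038
      w_database  = term_score(4.0, 2109, 44218)      # ~0.025836568
      w_22        = term_score(2.0, 3622, 44218)      # ~0.013709505

      coord = 2.0 / 3.0                               # 2 of 3 query clauses matched
      score = (w_reference + (w_database + w_22)) * coord
      print(round(score, 9))                          # ~0.047730967, the score shown for result 1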
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Review in: JASIST 54(2003) no.9, pp.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet interesting 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the crossdisciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are terse, shallow, and lack complementary illustrations of visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  2. Farkas, M.G.: Social software in libraries : building collaboration, communication, and community online (2007) 0.03
    0.03205038 = product of:
      0.096151136 = sum of:
        0.096151136 = weight(_text_:reference in 2364) [ClassicSimilarity], result of:
          0.096151136 = score(doc=2364,freq=6.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.4671295 = fieldWeight in 2364, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=2364)
      0.33333334 = coord(1/3)
    
    Content
    Contents: What is social software? -- Blogs -- Blogs in libraries : practical applications -- RSS -- Wikis -- Online communities -- Social networking -- Social bookmarking and collaborative filtering -- Tools for synchronous online reference -- The mobile revolution -- Podcasting -- Screencasting and vodcasting -- Gaming -- What will work @ your library -- Keeping up : a primer -- Future trends in social software.
    LCSH
    Electronic reference services (Libraries)
    Subject
    Electronic reference services (Libraries)
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    0.025314227 = product of:
      0.03797134 = sum of:
        0.023130367 = weight(_text_:reference in 150) [ClassicSimilarity], result of:
          0.023130367 = score(doc=150,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.11237389 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.014840975 = product of:
          0.02968195 = sum of:
            0.02968195 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.02968195 = score(doc=150,freq=6.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, pp.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
  4. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : July 14 - 18, 2002, Portland, Oregon, USA. (2002) 0.02
    0.018425934 = product of:
      0.0276389 = sum of:
        0.018504294 = weight(_text_:reference in 172) [ClassicSimilarity], result of:
          0.018504294 = score(doc=172,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.08989911 = fieldWeight in 172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.015625 = fieldNorm(doc=172)
        0.0091346055 = product of:
          0.018269211 = sum of:
            0.018269211 = weight(_text_:database in 172) [ClassicSimilarity], result of:
              0.018269211 = score(doc=172,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.08932624 = fieldWeight in 172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.015625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    SESSION: Digital libraries for education - Middle school children's use of the ARTEMIS digital library (June Abbas, Cathleen Norris, Elliott Soloway) - Partnership reviewing: a cooperative approach for peer review of complex educational resources (John Weatherley, Tamara Sumner, Michael Khoo, Michael Wright, Marcel Hoffmann) - A digital library for geography examination resources (Lian-Heong Chua, Dion Hoe-Lian Goh, Ee-Peng Lim, Zehua Liu, Rebecca Pei-Hui Ang) - Digital library services for authors of learning materials (Flora McMartin, Youki Terada)
    SESSION: Novel search environments - Integration of simultaneous searching and reference linking across bibliographic resources on the web (William H. Mischo, Thomas G. Habing, Timothy W. Cole) - Exploring discussion lists: steps and directions (Paula S. Newman) - Comparison of two approaches to building a vertical search tool: a case study in the nanotechnology domain (Michael Chau, Hsinchun Chen, Jialun Qin, Yilu Zhou, Yi Qin, Wai-Ki Sung, Daniel McDonald)
    SESSION: Video and multimedia digital libraries - A multilingual, multimodal digital video library system (Michael R. Lyu, Edward Yau, Sam Sze) - A digital library data model for music (Natalia Minibayeva, Jon W. Dunn) - Video-cuebik: adapting image search to video shots (Alexander G. Hauptmann, Norman D. Papernick) - Virtual multimedia libraries built from the web (Neil C. Rowe) - Multi-modal information retrieval from broadcast video using OCR and speech recognition (Alexander G. Hauptmann, Rong Jin, Tobun Dorbin Ng)
    SESSION: OAI application - Extending SDARTS: extracting metadata from web databases and interfacing with the open archives initiative (Panagiotis G. Ipeirotis, Tom Barry, Luis Gravano) - Using the open archives initiative protocols with EAD (Christopher J. Prom, Thomas G. Habing) - Preservation and transition of NCSTRL using an OAI-based architecture (H. Anan, X. Liu, K. Maly, M. Nelson, M. Zubair, J. C. French, E. Fox, P. Shivakumar) - Integrating harvesting into digital library content (David A. Smith, Anne Mahoney, Gregory Crane)
    SESSION: Searching across language, time, and space - Harvesting translingual vocabulary mappings for multilingual digital libraries (Ray R. Larson, Fredric Gey, Aitao Chen) - Detecting events with date and place information in unstructured text (David A. Smith) - Using sharable ontology to retrieve historical images (Von-Wun Soo, Chen-Yu Lee, Jaw Jium Yeh, Ching-chih Chen) - Towards an electronic variorum edition of Cervantes' Don Quixote: visualizations that support preparation (Rajiv Kochumman, Carlos Monroy, Richard Furuta, Arpita Goenka, Eduardo Urbina, Erendira Melgoza)
    SESSION: Federating and harvesting metadata - DP9: an OAI gateway service for web crawlers (Xiaoming Liu, Kurt Maly, Mohammad Zubair, Michael L. Nelson) - The Greenstone plugin architecture (Ian H. Witten, David Bainbridge, Gordon Paynter, Stefan Boddie) - Building FLOW: federating libraries on the web (Anna Keller Gold, Karen S. Baker, Jean-Yves LeMeur, Kim Baldridge) - JAFER ToolKit project: interfacing Z39.50 and XML (Antony Corfield, Matthew Dovey, Richard Mawby, Colin Tatham) - Schema extraction from XML collections (Boris Chidlovskii) - Mirroring an OAI archive on the I2-DSI channel (Ashwini Pande, Malini Kothapalli, Ryan Richardson, Edward A. Fox)
    SESSION: Music digital libraries - HMM-based musical query retrieval (Jonah Shifrin, Bryan Pardo, Colin Meek, William Birmingham) - A comparison of melodic database retrieval techniques using sung queries (Ning Hu, Roger B. Dannenberg) - Enhancing access to the Levy sheet music collection: reconstructing full-text lyrics from syllables (Brian Wingenroth, Mark Patton, Tim DiLauro) - Evaluating automatic melody segmentation aimed at music information retrieval (Massimo Melucci, Nicola Orio)
    SESSION: Preserving, securing, and assessing digital libraries - A methodology and system for preserving digital data (Raymond A. Lorie) - Modeling web data (James C. French) - An evaluation model for a digital library services tool (Jim Dorward, Derek Reinke, Mimi Recker) - Why watermark?: the copyright need for an engineering solution (Michael Seadle, J. R. Deller, Jr., Aparna Gurijala)
    SESSION: Image and cultural digital libraries - Time as essence for photo browsing through personal digital libraries (Adrian Graham, Hector Garcia-Molina, Andreas Paepcke, Terry Winograd) - Toward a distributed terabyte text retrieval system in China-US million book digital library (Bin Liu, Wen Gao, Ling Zhang, Tie-jun Huang, Xiao-ming Zhang, Jun Cheng) - Enhanced perspectives for historical and cultural documentaries using Informedia technologies (Howard D. Wactlar, Ching-chih Chen) - Interfaces for palmtop image search (Mark Derthick)
  5. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.01
    0.01068346 = product of:
      0.03205038 = sum of:
        0.03205038 = weight(_text_:reference in 1796) [ClassicSimilarity], result of:
          0.03205038 = score(doc=1796,freq=6.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.15570983 = fieldWeight in 1796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.015625 = fieldNorm(doc=1796)
      0.33333334 = coord(1/3)
    
    Footnote
    Some of the pieces are more captivating than others and less "how-to" in nature, providing contextual discussions as well as pragmatic advice. For example, Darlene Fichter's "Blogging Your Life Away" is an interesting discussion about creating and maintaining blogs. (For those unfamiliar with the term, blogs are frequently updated Web pages that list thematically tied annotated links or lists, such as a blog of "Great Websites of the Week" or of "Fun Things to Do This Month in Patterson, New Jersey.") Fichter's article includes descriptions of sample blogs and a comparison of commercially available blog creation software. Another article of note is Kelly Broughton's detailed account of her library's experiences in initiating Web-based reference in an academic library. "Our Experiment in Online Real-Time Reference" details the decisions and issues that the Jerome Library staff at Bowling Green State University faced in setting up a chat reference service. It might be useful to those finding themselves in the same situation. This volume is at its best when it eschews pragmatic information and delves into the deeper, less ephemeral library-related issues created by the rise of the Internet and of the Web. One of the most thought-provoking topics covered is the issue of "the serials pricing crisis," or the increase in subscription prices to journals that publish scholarly work. The pros and cons of moving toward a more free-access Web-based system for the dissemination of peer-reviewed material and of using university Web sites to house scholars' other works are discussed. However, deeper discussions such as these are few, leaving the volume subject to rapid aging, and leaving it with an audience limited to librarians looking for fast technological fixes."
  6. Broughton, V.: Essential thesaurus construction (2006) 0.01
    0.0087230075 = product of:
      0.026169023 = sum of:
        0.026169023 = weight(_text_:reference in 2924) [ClassicSimilarity], result of:
          0.026169023 = score(doc=2924,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.12713654 = fieldWeight in 2924, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.015625 = fieldNorm(doc=2924)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: Mitt. VÖB 60(2007) no.1, pp.98-101 (O. Oberhauser): "The author of Essential thesaurus construction (and essential taxonomy construction, as the implicit subtitle runs, cf. p. 1) is well qualified for the subject through her teaching at the well-known School of Library, Archive and Information Studies at University College London and through her previous publications in the fields of (faceted) classification and thesauri. Following Essential classification, her thesaurus textbook is now available: with roughly 200 pages of text and just under 100 pages of appendices it is a handy volume that, as the short introductory chapter notes, owes its genesis largely to her teaching. The book is indebted to the school of Jean Aitchison et al. and addresses "the indexer" in the broadest sense, i.e. everyone who wants or needs to build a structured, controlled vocabulary for subject indexing and retrieval. It aims to give this audience the methodological tools for such a task, which it does in twenty chapters including the introduction and the concluding remarks - an appealing structure that makes it possible to work through the material in well-measured steps. The exercises the author poses throughout (with solutions at the end of each chapter) contribute to this as well. At the outset, the "information retrieval thesaurus" is distinguished from the "reference thesaurus" that (at least in the English-speaking world) is far more often associated with the term thesaurus - a dictionary of synonyms arranged by conceptual similarity that is popular as an aid to stylistic improvement when writing (scholarly) papers. Without yet going into detail, the visual appearance and fields of application of thesauri are introduced, the thesaurus is explained as a post-coordinate indexing language, and its closeness to faceted classification systems is mentioned. Broughton then contrasts systematically organized systems (classification/taxonomy, concept and topic diagrams, ontologies) with alphabetically arranged, word-based ones (subject heading lists, thesaurus-like subject heading systems, and thesauri in the proper sense), which gives the reader further help in placing these tools. The possible uses of thesauri for indexing (including as a source of metadata for electronic and Web documents) and for retrieval (query formulation, query expansion, browsing and navigation) are discussed, as are the problems that arise with natural-language indexing systems. Examples explicitly point out the more or less pronounced subject specialization of most of these vocabularies, and information sources on thesauri (e.g. www.taxonomywarehouse.com) as well as thesauri for non-textual resources are briefly touched upon.
    Further reviews in: New Library World 108(2007) nos.3/4, pp.190-191 (K.V. Trickey): "Vanda has provided a very useful work that will enable any reader who is prepared to follow her instruction to produce a thesaurus that will be a quality language-based subject access tool that will make the task of information retrieval easier and more effective. Once again I express my gratitude to Vanda for producing another excellent book." - Electronic Library 24(2006) no.6, pp.866-867 (A.G. Smith): "Essential thesaurus construction is an ideal instructional text, with clear bullet point summaries at the ends of sections, and relevant and up to date references, putting thesauri in context with the general theory of information retrieval. But it will also be a valuable reference for any information professional developing or using a controlled vocabulary." - KO 33(2006) no.4, pp.215-216 (M.P. Satija)
  7. Research and advanced technology for digital libraries : 11th European conference, ECDL 2007 / Budapest, Hungary, September 16-21, 2007, proceedings (2007) 0.01
    0.008612189 = product of:
      0.025836568 = sum of:
        0.025836568 = product of:
          0.051673137 = sum of:
            0.051673137 = weight(_text_:database in 2430) [ClassicSimilarity], result of:
              0.051673137 = score(doc=2430,freq=4.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.25265276 = fieldWeight in 2430, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2430)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    LCSH
    Database management
    Subject
    Database management
  8. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.0077101225 = product of:
      0.023130367 = sum of:
        0.023130367 = weight(_text_:reference in 636) [ClassicSimilarity], result of:
          0.023130367 = score(doc=636,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.11237389 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.33333334 = coord(1/3)
    
    Footnote
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
  9. Lavrenko, V.: ¬A generative theory of relevance (2009) 0.01
    0.0076121716 = product of:
      0.022836514 = sum of:
        0.022836514 = product of:
          0.045673028 = sum of:
            0.045673028 = weight(_text_:database in 3306) [ClassicSimilarity], result of:
              0.045673028 = score(doc=3306,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.2233156 = fieldWeight in 3306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3306)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A modern information retrieval system must have the capability to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be of relevance to the user. However, the concept of relevance, while seemingly intuitive, is actually hard to define, and it's even harder to model in a formal way. Lavrenko does not attempt to bring forth a new definition of relevance, nor provide arguments as to why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book, he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance, complementing the two dominant models, i.e., the classical probabilistic model and the language modeling approach, and which explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables which does not make any structural assumptions about the data and which can also handle rare events. Thus his book is of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
  10. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    0.006854752 = product of:
      0.020564256 = sum of:
        0.020564256 = product of:
          0.041128512 = sum of:
            0.041128512 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.041128512 = score(doc=1781,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:35:21
  11. Theories of information behavior (2005) 0.01
    0.006168098 = product of:
      0.018504294 = sum of:
        0.018504294 = weight(_text_:reference in 68) [ClassicSimilarity], result of:
          0.018504294 = score(doc=68,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.08989911 = fieldWeight in 68, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.015625 = fieldNorm(doc=68)
      0.33333334 = coord(1/3)
    
    Footnote
    Further review in: JASIST 58(2007) no.2, p.303 (D.E. Agosto): "Due to the brevity of the entries, they serve more as introductions to a wide array of theories than as deep explorations of a select few. The individual entries are not as deep as those in more traditional reference volumes, such as The Encyclopedia of Library and Information Science (Drake, 2003) or The Annual Review of Information Science and Technology (ARIST) (Cronin, 2005), but the overall coverage is much broader. This volume is probably most useful to doctoral students who are looking for theoretical frameworks for nascent research projects or to more veteran researchers interested in an introductory overview of information behavior research, as those already familiar with this subfield will probably already be familiar with most of the theories presented here. Since different authors have penned each of the various entries, the writing styles vary somewhat, but on the whole, this is a readable, pithy volume that does an excellent job of encapsulating this important area of information research."
  12. Chu, H.: Information representation and retrieval in the digital age (2010) 0.01
    0.006168098 = product of:
      0.018504294 = sum of:
        0.018504294 = weight(_text_:reference in 92) [ClassicSimilarity], result of:
          0.018504294 = score(doc=92,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.08989911 = fieldWeight in 92, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.015625 = fieldNorm(doc=92)
      0.33333334 = coord(1/3)
    
    Footnote
    Chu's intent with this book is clear throughout the entire text. With this presentation, she writes with the novice in mind or, as she puts it in the Preface, "to anyone who is interested in learning about the field, particularly those who are new to it." After reading the text, I found that this book is also an appropriate reference book for those who are somewhat advanced in the field. I found the chapters on information retrieval models and techniques, metadata, and AI very informative in that they contain information that is often rather densely presented in other texts. Although, I must say, the metadata section in Chapter 3 is pretty basic and contains more questions about the area than information. . . . It is an excellent book to have in the classroom, on your bookshelf, etc. It reads very well and is written with the reader in mind. If you are in need of a more advanced or technical text on the subject, this is not the book for you. But, if you are looking for a comprehensive manual that can be used as a "flip-through," then you are in luck."
  13. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.005712294 = product of:
      0.017136881 = sum of:
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.034273762 = score(doc=1397,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:29:25
  14. Bleuel, J.: Online Publizieren im Internet : elektronische Zeitschriften und Bücher (1995) 0.01
    0.005712294 = product of:
      0.017136881 = sum of:
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 1708) [ClassicSimilarity], result of:
              0.034273762 = score(doc=1708,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 1708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1708)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 16:15:37
  15. Medienkompetenz : wie lehrt und lernt man Medienkompetenz? (2003) 0.00
    0.004569835 = product of:
      0.013709505 = sum of:
        0.013709505 = product of:
          0.02741901 = sum of:
            0.02741901 = weight(_text_:22 in 2249) [ClassicSimilarity], result of:
              0.02741901 = score(doc=2249,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.15476047 = fieldWeight in 2249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2249)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 18:05:16
  16. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.00
    0.004569835 = product of:
      0.013709505 = sum of:
        0.013709505 = product of:
          0.02741901 = sum of:
            0.02741901 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
              0.02741901 = score(doc=2426,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.15476047 = fieldWeight in 2426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2426)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  17. Research and advanced technology for digital libraries : 10th European conference ; proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.00
    0.004569835 = product of:
      0.013709505 = sum of:
        0.013709505 = product of:
          0.02741901 = sum of:
            0.02741901 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
              0.02741901 = score(doc=2428,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.15476047 = fieldWeight in 2428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2428)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    

Languages

  • e (English) 13
  • d (German) 3

Types

  • m 17
  • s 10
