Search (54 results, page 1 of 3)

  • classification_ss:"06.74 / Informationssysteme"
  1. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : July 14 - 18, 2002, Portland, Oregon, USA. (2002) 0.04
    0.038101435 = product of:
      0.050801914 = sum of:
        0.021717783 = weight(_text_:da in 172) [ClassicSimilarity], result of:
          0.021717783 = score(doc=172,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.10602563 = fieldWeight in 172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.015625 = fieldNorm(doc=172)
        0.025592657 = product of:
          0.051185314 = sum of:
            0.051185314 = weight(_text_:silva in 172) [ClassicSimilarity], result of:
              0.051185314 = score(doc=172,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.16277054 = fieldWeight in 172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.015625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
        0.0034914727 = product of:
          0.0069829454 = sum of:
            0.0069829454 = weight(_text_:a in 172) [ClassicSimilarity], result of:
              0.0069829454 = score(doc=172,freq=62.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.14186095 = fieldWeight in 172, product of:
                  7.8740077 = tf(freq=62.0), with freq of:
                    62.0 = termFreq=62.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.015625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
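Each indented breakdown in this result list is Lucene's explain() output for the ClassicSimilarity (TF-IDF) ranking formula. As a minimal sketch (Python, with every constant copied from the tree above rather than computed from a live index), the leaf weight for _text_:da in doc 172 can be reproduced like this:

```python
import math

# Lucene ClassicSimilarity, reconstructed from the explain tree above.
# docFreq, maxDocs, queryNorm and fieldNorm are the values printed there.

def idf(doc_freq: int, max_docs: int) -> float:
    """idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1))"""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    """weight = queryWeight * fieldWeight, where
       queryWeight = idf * queryNorm
       fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)"""
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # 0.20483522 for _text_:da
    field_weight = math.sqrt(freq) * i * field_norm  # 0.10602563
    return query_weight * field_weight

# weight(_text_:da in 172): freq=2, docFreq=990, maxDocs=44218,
# queryNorm=0.04269026, fieldNorm=0.015625
w_da = term_weight(2.0, 990, 44218, 0.04269026, 0.015625)
print(w_da)  # ~0.021717783, as shown in the tree above
```

The "0.75 = coord(3/4)" line then scales the summed clause weights by the fraction of query clauses that matched this document.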
    
    Content
    SESSION: Building and using cultural digital libraries Primarily history: historians and the search for primary source materials (Helen R. Tibbo) - Using the Gamera framework for the recognition of cultural heritage materials (Michael Droettboom, Ichiro Fujinaga, Karl MacMillan, G. Sayeed Choudhury, Tim DiLauro, Mark Patton, Teal Anderson) - Supporting access to large digital oral history archives (Samuel Gustman, Dagobert Soergel, Douglas Oard, William Byrne, Michael Picheny, Bhuvana Ramabhadran, Douglas Greenberg) SESSION: Summarization and question answering Using sentence-selection heuristics to rank text segments in TXTRACTOR (Daniel McDonald, Hsinchun Chen) - Using librarian techniques in automatic text summarization for information retrieval (Min-Yen Kan, Judith L. Klavans) - QuASM: a system for question answering using semi-structured data (David Pinto, Michael Branstein, Ryan Coleman, W. Bruce Croft, Matthew King, Wei Li, Xing Wei) SESSION: Studying users Reading-in-the-small: a study of reading on small form factor devices (Catherine C. Marshall, Christine Ruotolo) - A graph-based recommender system for digital library (Zan Huang, Wingyan Chung, Thian-Huat Ong, Hsinchun Chen) - The effects of topic familiarity on information search behavior (Diane Kelly, Colleen Cool) SESSION: Classification and browsing A language modelling approach to relevance profiling for document browsing (David J. Harper, Sara Coulthard, Sun Yixing) - Compound descriptors in context: a matching function for classifications and thesauri (Douglas Tudhope, Ceri Binding, Dorothee Blocks, Daniel Cunliffe) - Structuring keyword-based queries for web databases (Rodrigo C. Vieira, Pavel Calado, Altigran S. da Silva, Alberto H. F. Laender, Berthier A. Ribeiro-Neto) - An approach to automatic classification of text for information retrieval (Hong Cui, P. Bryan Heidorn, Hong Zhang)
    SESSION: Digital libraries for education Middle school children's use of the ARTEMIS digital library (June Abbas, Cathleen Norris, Elliott Soloway) - Partnership reviewing: a cooperative approach for peer review of complex educational resources (John Weatherley, Tamara Sumner, Michael Khoo, Michael Wright, Marcel Hoffmann) - A digital library for geography examination resources (Lian-Heong Chua, Dion Hoe-Lian Goh, Ee-Peng Lim, Zehua Liu, Rebecca Pei-Hui Ang) - Digital library services for authors of learning materials (Flora McMartin, Youki Terada) SESSION: Novel search environments Integration of simultaneous searching and reference linking across bibliographic resources on the web (William H. Mischo, Thomas G. Habing, Timothy W. Cole) - Exploring discussion lists: steps and directions (Paula S. Newman) - Comparison of two approaches to building a vertical search tool: a case study in the nanotechnology domain (Michael Chau, Hsinchun Chen, Jialun Qin, Yilu Zhou, Yi Qin, Wai-Ki Sung, Daniel McDonald) SESSION: Video and multimedia digital libraries A multilingual, multimodal digital video library system (Michael R. Lyu, Edward Yau, Sam Sze) - A digital library data model for music (Natalia Minibayeva, Jon W. Dunn) - Video-cuebik: adapting image search to video shots (Alexander G. Hauptmann, Norman D. Papernick) - Virtual multimedia libraries built from the web (Neil C. Rowe) - Multi-modal information retrieval from broadcast video using OCR and speech recognition (Alexander G. Hauptmann, Rong Jin, Tobun Dorbin Ng) SESSION: OAI application Extending SDARTS: extracting metadata from web databases and interfacing with the open archives initiative (Panagiotis G. Ipeirotis, Tom Barry, Luis Gravano) - Using the open archives initiative protocols with EAD (Christopher J. Prom, Thomas G. Habing) - Preservation and transition of NCSTRL using an OAI-based architecture (H. Anan, X. Liu, K. Maly, M. Nelson, M. Zubair, J. C. French, E. Fox, P. 
Shivakumar) - Integrating harvesting into digital library content (David A. Smith, Anne Mahoney, Gregory Crane) SESSION: Searching across language, time, and space Harvesting translingual vocabulary mappings for multilingual digital libraries (Ray R. Larson, Fredric Gey, Aitao Chen) - Detecting events with date and place information in unstructured text (David A. Smith) - Using sharable ontology to retrieve historical images (Von-Wun Soo, Chen-Yu Lee, Jaw Jium Yeh, Ching-chih Chen) - Towards an electronic variorum edition of Cervantes' Don Quixote: visualizations that support preparation (Rajiv Kochumman, Carlos Monroy, Richard Furuta, Arpita Goenka, Eduardo Urbina, Erendira Melgoza)
    SESSION: NSDL Core services in the architecture of the national science digital library (NSDL) (Carl Lagoze, William Arms, Stoney Gan, Diane Hillmann, Christopher Ingram, Dean Krafft, Richard Marisa, Jon Phipps, John Saylor, Carol Terrizzi, Walter Hoehn, David Millman, James Allan, Sergio Guzman-Lara, Tom Kalt) - Creating virtual collections in digital libraries: benefits and implementation issues (Gary Geisler, Sarah Giersch, David McArthur, Marty McClelland) - Ontology services for curriculum development in NSDL (Amarnath Gupta, Bertram Ludäscher, Reagan W. Moore) - Interactive digital library resource information system: a web portal for digital library education (Ahmad Rafee Che Kassim, Thomas R. Kochtanek) SESSION: Digital library communities and change Cross-cultural usability of the library metaphor (Elke Duncker) - Trust and epistemic communities in biodiversity data sharing (Nancy A. Van House) - Evaluation of digital community information systems (K. T. Unruh, K. E. Pettigrew, J. C. Durrance) - Adapting digital libraries to continual evolution (Bruce R. Barkstrom, Melinda Finch, Michelle Ferebee, Calvin Mackey) SESSION: Models and tools for generating digital libraries Localizing experience of digital content via structural metadata (Naomi Dushay) - Collection synthesis (Donna Bergmark) - 5SL: a language for declarative specification and generation of digital libraries (Marcos André Gonçalves, Edward A. Fox) SESSION: Novel user interfaces A digital library of conversational expressions: helping profoundly disabled users communicate (Hayley Dunlop, Sally Jo Cunningham, Matt Jones) - Enhancing the ENVISION interface for digital libraries (Jun Wang, Abhishek Agrawal, Anil Bazaza, Supriya Angle, Edward A. Fox, Chris North) - A wearable digital library of personal conversations (Wei-hao Lin, Alexander G. 
Hauptmann) - Collaborative visual interfaces to digital libraries (Katy Börner, Ying Feng, Tamara McMahon) - Binding browsing and reading activities in a 3D digital library (Pierre Cubaud, Pascal Stokowski, Alexandre Topol)
    SESSION: Federating and harvesting metadata DP9: an OAI gateway service for web crawlers (Xiaoming Liu, Kurt Maly, Mohammad Zubair, Michael L. Nelson) - The Greenstone plugin architecture (Ian H. Witten, David Bainbridge, Gordon Paynter, Stefan Boddie) - Building FLOW: federating libraries on the web (Anna Keller Gold, Karen S. Baker, Jean-Yves LeMeur, Kim Baldridge) - JAFER ToolKit project: interfacing Z39.50 and XML (Antony Corfield, Matthew Dovey, Richard Mawby, Colin Tatham) - Schema extraction from XML collections (Boris Chidlovskii) - Mirroring an OAI archive on the I2-DSI channel (Ashwini Pande, Malini Kothapalli, Ryan Richardson, Edward A. Fox) SESSION: Music digital libraries HMM-based musical query retrieval (Jonah Shifrin, Bryan Pardo, Colin Meek, William Birmingham) - A comparison of melodic database retrieval techniques using sung queries (Ning Hu, Roger B. Dannenberg) - Enhancing access to the Levy sheet music collection: reconstructing full-text lyrics from syllables (Brian Wingenroth, Mark Patton, Tim DiLauro) - Evaluating automatic melody segmentation aimed at music information retrieval (Massimo Melucci, Nicola Orio) SESSION: Preserving, securing, and assessing digital libraries A methodology and system for preserving digital data (Raymond A. Lorie) - Modeling web data (James C. French) - An evaluation model for a digital library services tool (Jim Dorward, Derek Reinke, Mimi Recker) - Why watermark?: the copyright need for an engineering solution (Michael Seadle, J. R. 
Deller, Jr., Aparna Gurijala) SESSION: Image and cultural digital libraries Time as essence for photo browsing through personal digital libraries (Adrian Graham, Hector Garcia-Molina, Andreas Paepcke, Terry Winograd) - Toward a distributed terabyte text retrieval system in China-US million book digital library (Bin Liu, Wen Gao, Ling Zhang, Tie-jun Huang, Xiao-ming Zhang, Jun Cheng) - Enhanced perspectives for historical and cultural documentaries using informedia technologies (Howard D. Wactlar, Ching-chih Chen) - Interfaces for palmtop image search (Mark Derthick)
    SESSION: Digital libraries for spatial data The ADEPT digital library architecture (Greg Janée, James Frew) - G-Portal: a map-based digital library for distributed geospatial and georeferenced resources (Ee-Peng Lim, Dion Hoe-Lian Goh, Zehua Liu, Wee-Keong Ng, Christopher Soo-Guan Khoo, Susan Ellen Higgins) PANEL SESSION: Panels You mean I have to do what with whom: statewide museum/library DIGI collaborative digitization projects---the experiences of California, Colorado & North Carolina (Nancy Allen, Liz Bishoff, Robin Chandler, Kevin Cherry) - Overcoming impediments to effective health and biomedical digital libraries (William Hersh, Jan Velterop, Alexa McCray, Gunther Eysenbach, Mark Boguski) - The challenges of statistical digital libraries (Cathryn Dippo, Patricia Cruse, Ann Green, Carol Hert) - Biodiversity and biocomplexity informatics: policy and implementation science versus citizen science (P. Bryan Heidorn) - Panel on digital preservation (Joyce Ray, Robin Dale, Reagan Moore, Vicky Reich, William Underwood, Alexa T. McCray) - NSDL: from prototype to production to transformational national resource (William Y. Arms, Edward Fox, Jeanne Narum, Ellen Hoffman) - How important is metadata? (Hector Garcia-Molina, Diane Hillmann, Carl Lagoze, Elizabeth Liddy, Stuart Weibel) - Planning for future digital libraries programs (Stephen M. Griffin) DEMONSTRATION SESSION: Demonstrations, among others: FACET: thesaurus retrieval with semantic term expansion (Douglas Tudhope, Ceri Binding, Dorothee Blocks, Daniel Cunliffe) - MedTextus: an intelligent web-based medical meta-search system (Bin Zhu, Gondy Leroy, Hsinchun Chen, Yongchi Chen) POSTER SESSION: Posters TUTORIAL SESSION: Tutorials, among others: Thesauri and ontologies in digital libraries: 1. structure and use in knowledge-based assistance to users (Dagobert Soergel) - How to build a digital library using open-source software (Ian H. Witten) - Thesauri and ontologies in digital libraries: 2. 
design, evaluation, and development (Dagobert Soergel) WORKSHOP SESSION: Workshops Document search interface design for large-scale collections and intelligent access (Javed Mostafa) - Visual interfaces to digital libraries (Katy Börner, Chaomei Chen) - Text retrieval conference (TREC) genomics pre-track workshop (William Hersh)
  2. Franke, F.; Klein, A.; Schüller-Zwierlein, A.: Schlüsselkompetenzen : Literatur recherchieren in Bibliotheken und Internet (2010) 0.03
    0.03160042 = product of:
      0.06320084 = sum of:
        0.06142717 = weight(_text_:da in 4721) [ClassicSimilarity], result of:
          0.06142717 = score(doc=4721,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.29988578 = fieldWeight in 4721, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.03125 = fieldNorm(doc=4721)
        0.00177367 = product of:
          0.00354734 = sum of:
            0.00354734 = weight(_text_:a in 4721) [ClassicSimilarity], result of:
              0.00354734 = score(doc=4721,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.072065435 = fieldWeight in 4721, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4721)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Press reviews: "This is how searching for literature brings pleasure instead of frustration." www.literaturmarkt.info - "This guide demonstrates step by step how to find and work with the right literature, e.g. for seminar papers and term papers." www.lehrerbibliothek.de - "Schlüsselkompetenzen: Literatur recherchieren in Bibliotheken und Internet introduces students to the secrets of the university library. The book also provides information on online catalogues, databases, the Elektronische Zeitschriftenbibliothek (EZB) and the common search engines." www.stellenboersen.de - "A useful and highly informative textbook on the key competence of literature searching." Universitätsbibliothek Freiburg - "Good searching has to be learned, especially nowadays, when the range of available information has grown to unimagined proportions and many feel overwhelmed by it. Since not all universities offer suitable courses, it is good to have a knowledgeable guide like this one at hand." STUDIUM - "The guide impresses with its clear structure and is therefore also suitable as a reference work. Numerous sample searches, checklists and tips on literature searching make it easier to put what has been read into practice." ph akzente - "The focus is not just on searching but on a comprehensive notion of information literacy." Germanistik - "The inexpensive volume deserves an emphatic purchase recommendation to students (and teachers)." Informationsmittel (IFB) : digitales Rezensionsorgan für Bibliotheken und Wissenschaft
  3. Broughton, V.: Essential thesaurus construction (2006) 0.01
    0.012261101 = product of:
      0.024522202 = sum of:
        0.021717783 = weight(_text_:da in 2924) [ClassicSimilarity], result of:
          0.021717783 = score(doc=2924,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.10602563 = fieldWeight in 2924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.015625 = fieldNorm(doc=2924)
        0.0028044186 = product of:
          0.005608837 = sum of:
            0.005608837 = weight(_text_:a in 2924) [ClassicSimilarity], result of:
              0.005608837 = score(doc=2924,freq=40.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.11394546 = fieldWeight in 2924, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Many information professionals working in small units today fail to find the published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for the building of a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: What is a thesaurus? Tools for subject access and retrieval; what a thesaurus is used for? Why use a thesaurus? Examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and; the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
    Footnote
    In the more detailed chapters, Broughton first points to the importance of the systematic part of a thesaurus alongside the alphabetical part and then explains the elements of the latter; besides the usual thesaurus relationships, the option of equipping the entries with notations from a classification system is also mentioned. The thesaurus relationships themselves are discussed in more detail in a later chapter, which also takes up, for example, the polyhierarchical relationship. Two chapters on vocabulary control introduce aspects such as the treatment of synonyms, the avoidance of ambiguity, the choice of preferred terms, and the forms of thesaurus entries (grammatical form, spelling, character set, singular/plural, compounds and their decomposition, etc.). A total of eight chapters - didactically skilfully interleaved with the sections mentioned so far - come under the motto "Building a thesaurus". Briefly summarized, these deal with the following activities and processes: - collecting the vocabulary using appropriate sources; - term extraction from the titles of documents and the problems this raises; - analysis of the vocabulary (facet method); - building in an internal structure (facets and sub-facets, arrangement of terms); - creating a hierarchical structure and its representation; - compound subjects and concepts (facet arrangement: filing order vs. citation order); - converting the taxonomic arrangement into an alphabetical format (selecting the preferred terms, identifying hierarchical relationships, related terms, etc.); - generating the final thesaurus entries.
    These sections are clearly written and, despite the at times far from simple subject matter, suitable even for beginners. It is certainly an advantage that the author consistently demonstrates thesaurus construction with a single thematic example, having chosen the field of "animal welfare", presumably not least because the facets and relationships that arise there can be followed by most readers without deep specialist knowledge. The methodological framework of facet analysis is emphasized much more strongly here than, for instance, in the (sparse) German-language thesaurus literature. Besides establishing order, this approach is also meant to help keep the number of descriptors manageable and to rely less on complex (pre-combined) descriptors than on post-coordinate indexing. For this purpose the scheme of the 13 "fundamental categories" of the UK Classification Research Group (CRG), regarded as a refinement of Ranganathan's well-known PMEST formula, is proposed and used in the example (Thing / Kind / Part / Property; Material / Process / Operation; Patient / Product / By-product / Agent; Space; Time). As a minor criticism it may be noted that in her demonstration example Broughton uses a letter sequence that is, in my view, hard to read as the notation for the order she develops, although she concedes (p. 165) that a numeric code is often felt to be easier to handle.
    Further reviews in: New Library World 108(2007) nos.3/4, pp.190-191 (K.V. Trickey): "Vanda has provided a very useful work that will enable any reader who is prepared to follow her instruction to produce a thesaurus that will be a quality language-based subject access tool that will make the task of information retrieval easier and more effective. Once again I express my gratitude to Vanda for producing another excellent book." - Electronic Library 24(2006) no.6, pp.866-867 (A.G. Smith): "Essential thesaurus construction is an ideal instructional text, with clear bullet point summaries at the ends of sections, and relevant and up-to-date references, putting thesauri in context with the general theory of information retrieval. But it will also be a valuable reference for any information professional developing or using a controlled vocabulary." - KO 33(2006) no.4, pp.215-216 (M.P. Satija)
  4. Chu, H.: Information representation and retrieval in the digital age (2010) 0.01
    0.011989389 = product of:
      0.023978777 = sum of:
        0.021717783 = weight(_text_:da in 92) [ClassicSimilarity], result of:
          0.021717783 = score(doc=92,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.10602563 = fieldWeight in 92, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.015625 = fieldNorm(doc=92)
        0.0022609944 = product of:
          0.004521989 = sum of:
            0.004521989 = weight(_text_:a in 92) [ClassicSimilarity], result of:
              0.004521989 = score(doc=92,freq=26.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.09186576 = fieldWeight in 92, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.015625 = fieldNorm(doc=92)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
    Review in: JASIST 56(2005) no.2, pp.215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip over themselves is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, an often painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
    Chu's intent with this book is clear throughout the entire text. With this presentation, she writes with the novice in mind or, as she puts it in the Preface, "to anyone who is interested in learning about the field, particularly those who are new to it." After reading the text, I found that this book is also an appropriate reference book for those who are somewhat advanced in the field. I found the chapters on information retrieval models and techniques, metadata, and AI very informative in that they contain information that is often rather densely presented in other texts. Although, I must say, the metadata section in Chapter 3 is pretty basic and contains more questions about the area than information. . . . It is an excellent book to have in the classroom, on your bookshelf, etc. It reads very well and is written with the reader in mind. If you are in need of a more advanced or technical text on the subject, this is not the book for you. But, if you are looking for a comprehensive manual that can be used as a "flip-through," then you are in luck."
    Unfortunately there is no comparable title in German. The information retrieval book by Ferber (2003) is rather mathematically oriented and, with its great level of detail and the correspondingly large number of formulas, is likely to deter beginning students of information science; it is better suited to those who want to engage with the subject more intensively. Much the same applies to the lecture notes by Fuhr, which some like to use. The book by Gaus (2003) is by now a classic, but deals essentially with knowledge representation and offers little that is current; topics such as information retrieval on the internet and multimedia retrieval are missing entirely. The compilation by Poetzsch (2002) likewise concentrates on IR in classical databases and does not aim at a systematic presentation of the field. One could therefore wish that the book reviewed here were also used in teaching in this country, since it gives students a concise, very readable introduction to the subject area. Given its exemplary preparation of the material, it should also serve as a model for future textbook authors. And finally, the reviewer would welcome a German translation of this volume."
  5. Wikipedia : das Buch : aus der freien Enzyklopädie Wikipedia ; [mit der DVD-ROM Wikipedia 2005/2006] (2005) 0.01
    0.011302309 = product of:
      0.022604618 = sum of:
        0.021717783 = weight(_text_:da in 118) [ClassicSimilarity], result of:
          0.021717783 = score(doc=118,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.10602563 = fieldWeight in 118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.015625 = fieldNorm(doc=118)
        8.86835E-4 = product of:
          0.00177367 = sum of:
            0.00177367 = weight(_text_:a in 118) [ClassicSimilarity], result of:
              0.00177367 = score(doc=118,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.036032718 = fieldWeight in 118, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.015625 = fieldNorm(doc=118)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Classification
    A 010
    Footnote
    »Wikipedia - Das Buch« sets out to improve what is at best a mixed image. In fact it confines itself to documenting the rules and conventions of participation, as they look in the ideal case, in the form of numerous text excerpts from the online version, in a diction that could have been lifted from washing-powder advertising: »Wikipedia is a special encyclopedia« (p. 9), »Wikipedia is dynamic« (p. 10), »Wikipedia is fast« (p. 11), »Wikipedia is transparent« (p. 12), »Wikipedia is collaborative« (p. 13), »Wikipedia is fun« (p. 15), »Wikipedia is free« (p. 15). There is no trace of a critical approach to its own work, but the book quite evidently also sees itself as a paid advertising brochure with instructions on how one may contribute free of charge. It is practical, however, that the various comments on rules and conventions, references and text design are summarized quite manageably in around 270 pages of text. On the whole, though, one wonders who is supposed to need this book: it contains nothing that is not available online and nothing of interest to anyone who is not a passionate Wikipedian. The preface does point out that this information appears in print here for the first time and can thus conveniently be consulted while working at the computer, but the text is also enclosed once more on a DVD-ROM. Only the gods know what the point is of putting a text on a data carrier when modern windowing technology lets one read it online at any time and in comfort. This book, however, must be seen in the context of a comprehensive publicity machine resembling the one for the pop singer Robbie Williams: hardly a day passes without Wikipedia being mentioned in the media.
Wikipedia is, moreover, quite obviously misused for entirely extraneous purposes: the project's prominence means that links placed in it can raise a page's rank in search engines, and that personal preferences, idols and muddled ideologies can be put on display worldwide. The partnership with search engines is among the most conspicuous marketing strategies on the Internet, since both sides profit from it. The bad habit of Google, for example, and regrettably also of Vivisimo, of automatically ranking links from Wikipedia highly creates a veneer of respectability that only those readers notice who have enough subject knowledge to question the search engines doing the ranking. Not least, Wikipedia helps Google, Yahoo and others to cleanse themselves of the image of purchasable results, and for many more exotic topics it yields links that at first glance promise more quality than the notorious "Bid or buy it now!" with which Google points to eBay, the third Moloch of the Internet mainstream, for any topic whatsoever.
    KAB
    A 010
  6. Klems, M.: Finden, was man sucht! : Strategien und Werkzeuge für die Internet-Recherche (2003) 0.01
    Footnote
Review in: FR Nr.165 vom 18.7.2003, S.14 (T.P. Gangloff): "Search engines are indispensable helpers for Internet research. But when the hit list offers too many links, the search can turn into a sleep-depriving odyssey. For anyone who despairs at extensive hit lists, Michael Klems's brochure Finden, was man sucht! is the right choice. Klems first clarifies the basics: these research aids are merely machines, their often interest-driven information should not be used unchecked, and the Internet should never be the only source anyway. The concrete tips are interesting, for instance on efficient browser use (open a search result in a new window with the right mouse button, so the hit list stays visible) or on building and organizing a bookmark collection. The brochure becomes really engaging once Klems finally goes online. He explains how the right search terms raise the hit rate: since not all engines work from the word stem, it is advisable to enter terms in both singular and plural; Klems also argues for lowercase throughout. One also learns how to combine terms. Many users rely on Google for their research and overlook the fact that web directories or specialized search services can be more useful. Klems describes when which services make sense: a search engine keeps you up to date, while a directory such as Web.de helps when searching for evaluated information. Meta search engines such as Metager.de are the joker, and only sensible for terms with a potentially low hit rate. The discussion forums of the Usenet, reachable via Google's Groups search, can likewise be promising starting points. The tips on literature research are valuable. A multi-page collection of links rounds off the brochure."
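The singular/plural and lowercase tips from the review can be sketched in a few lines. A minimal illustration in Python, with a deliberately naive English-style pluralizer invented for this sketch (real engines and real morphology are far richer):

```python
# Sketch of the "enter singular and plural" tip for engines that do
# not stem words. The pluralizer below is a naive illustration, not
# a real linguistic rule set.
def query_variants(term):
    term = term.lower()           # the brochure recommends lowercase
    variants = {term}
    if term.endswith("s"):
        variants.add(term[:-1])   # crude singular form
    else:
        variants.add(term + "s")  # crude plural form
    return sorted(variants)

print(query_variants("Database"))
```

For a German term like »Suchmaschine« this naive rule would produce »suchmaschines« rather than the correct »Suchmaschinen«, which is exactly why the review recommends entering both forms by hand rather than trusting automation.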
  7. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Classification
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
    Date
    22. 3.2008 14:29:25
    RVK
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
  8. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2006) 0.01
    Footnote
The third chapter, "Fachbezogenes Information Retrieval", describes the retrieval facilities of the hosts Dialog and STN International on the basis of the retrieval languages Dialog and Messenger as well as the two providers' web interfaces. Thematically this chapter is oriented towards business information and scientific and technical information. A list of further monographs, a compilation of the electronic references and an index close the volume. To fit the comprehensive topic of IR into a manageable textbook, cuts and priorities are unavoidable; in line with the course for which this book forms the teaching material, the author has placed the emphasis on licensed online databases. This restriction, however, can create the impression that serious research depends exclusively on paid offerings; the ever more important and extensive range of scholarly, quality-controlled free or even open access databases should at least be mentioned in an introductory volume. Establishing whether paid queries are needed at all to satisfy an information need should be an explicit part of every search preparation (chapter 1.3). For later editions it would also be worth considering whether Boolean and proximity operators, phrase searching, truncation, parentheses and field searching should not be treated generally and abstractly in the first chapter. At present these search techniques are covered only in chapters 2 and 3, and only for the selected retrieval languages. The first chapter could then be used on its own as a concise reading recommendation and study aid for an introduction to database searching in undergraduate teaching, even where the retrieval facilities of the specific hosts are not a topic of the course. 
Somewhat weightier than these remarks on content is the criticism to be levelled at the visual design of the text. Inconsistent font sizes, an overload of highlighting (italics, bold, underlining, sometimes in combination) and a general preference for bullet lists over continuous prose produce a rather restless appearance, which probably makes engaging with the subject and finding one's way around the book a little harder. Verdict: despite the points of criticism raised, this is a recommendable introduction to working with research databases, particularly for those readers interested in an explicitly practice-oriented introduction to command retrieval for the hosts discussed."
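The search techniques the reviewer would like to see treated abstractly (Boolean operators, truncation, field searching) can indeed be demonstrated independently of any host. A toy sketch in Python over an invented mini-collection, with no claim to match Dialog or Messenger syntax:

```python
# Toy Boolean retrieval over an inverted index; the documents and the
# trailing-* truncation syntax are invented for illustration only.
docs = {
    1: "information retrieval basics",
    2: "online database search",
    3: "retrieval of chemical information",
}

def build_index(docs):
    inv = {}
    for doc_id, text in docs.items():
        for term in text.split():
            inv.setdefault(term, set()).add(doc_id)
    return inv

inv = build_index(docs)

def match(term):
    """Look up a term; a trailing * means prefix (truncation) match."""
    if term.endswith("*"):
        prefix = term[:-1]
        hits = set()
        for t, ids in inv.items():
            if t.startswith(prefix):
                hits |= ids
        return hits
    return inv.get(term, set())

# AND = intersection, OR = union, NOT = set difference
result = match("retriev*") & match("information") - match("chemical")
```

AND, OR and NOT map directly onto set intersection, union and difference over posting lists, which is all the abstract first-chapter treatment the review asks for would need.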
  9. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.01
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
Review in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The search and retrieval descriptors deployed are naturally subjective, and their use is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. 
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  10. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.01
    Content
Contents: Uses, Users, and User Interaction Metadata Applications - Semantic Browsing / Alexander Faaborg, Carl Lagoze Annotation and Recommendation Automatic Classification and Indexing - Cross-Lingual Text Categorization / Nuria Bel, Cornelis H.A. Koster, Marta Villegas - Automatic Multi-label Subject Indexing in a Multilingual Environment / Boris Lauser, Andreas Hotho Web Technologies Topical Crawling, Subject Gateways - VASCODA: A German Scientific Portal for Cross-Searching Distributed Digital Resource Collections / Heike Neuroth, Tamara Pianos Architectures and Systems Knowledge Organization: Concepts - The ADEPT Concept-Based Digital Learning Environment / T.R. Smith, D. Ancona, O. Buchel, M. Freeston, W. Heller, R. Nottrott, T. Tierney, A. Ushakov - A User Evaluation of Hierarchical Phrase Browsing / Katrina D. Edgar, David M. Nichols, Gordon W. Paynter, Kirsten Thomson, Ian H. Witten - Visual Semantic Modeling of Digital Libraries / Qinwei Zhu, Marcos André Gonçalves, Rao Shen, Lillian Cassell, Edward A. Fox Collection Building and Management Knowledge Organization: Authorities and Works - Automatic Conversion from MARC to FRBR / Christian Mönch, Trond Aalberg Information Retrieval in Different Application Areas Digital Preservation Indexing and Searching of Special Document and Collection Information
  11. Research and advanced technology for digital libraries : 10th European conference ; proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.01
    Abstract
    This book constitutes the refereed proceedings of the 10th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2006, held in Alicante, Spain in September 2006. The 36 revised full papers presented together with the extended abstracts of 18 demo papers and 15 revised poster papers were carefully reviewed and selected from a total of 159 submissions. The papers are organized in topical sections on architectures, preservation, retrieval, applications, methodology, metadata, evaluation, user studies, modeling, audiovisual content, and language technologies.
    Content
Contents include: Architectures I Preservation Retrieval - The Use of Summaries in XML Retrieval / Zoltán Szlávik, Anastasios Tombros, Mounia Lalmas - An Enhanced Search Interface for Information Discovery from Digital Libraries / Georgia Koutrika, Alkis Simitsis - The TIP/Greenstone Bridge: A Service for Mobile Location-Based Access to Digital Libraries / Annika Hinze, Xin Gao, David Bainbridge Architectures II Applications Methodology Metadata Evaluation User Studies Modeling Audiovisual Content Language Technologies - Incorporating Cross-Document Relationships Between Sentences for Single Document Summarizations / Xiaojun Wan, Jianwu Yang, Jianguo Xiao - Semantic Web Techniques for Multiple Views on Heterogeneous Collections: A Case Study / Marjolein van Gendt, Antoine Isaac, Lourens van der Meij, Stefan Schlobach Posters - A Tool for Converting from MARC to FRBR / Trond Aalberg, Frank Berg Haugen, Ole Husby
  12. Information visualization in data mining and knowledge discovery (2002) 0.00
    Date
    23. 3.2008 19:10:22
    Footnote
Review in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). 
It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided. 
This text concludes with a thorough and useful 17-page index and a lengthy 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the crossdisciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement the goal of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  13. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.00
    Date
    22. 3.2008 14:35:21
  14. Bleuel, J.: Online Publizieren im Internet : elektronische Zeitschriften und Bücher (1995) 0.00
    Date
    22. 3.2008 16:15:37
  15. Medienkompetenz : wie lehrt und lernt man Medienkompetenz? (2003) 0.00
    Date
    22. 3.2008 18:05:16
  16. Lavrenko, V.: ¬A generative theory of relevance (2009) 0.00
    Abstract
    A modern information retrieval system must have the capability to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be of relevance to the user. However, the concept of relevance, while seemingly intuitive, is actually hard to define, and it's even harder to model in a formal way. Lavrenko does not attempt to bring forth a new definition of relevance, nor provide arguments as to why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book, he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance, complementing the two dominant models, i.e., the classical probabilistic model and the language modeling approach, and which explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables which does not make any structural assumptions about the data and which can also handle rare events. Thus his book is of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
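    The language modeling approach that Lavrenko's formalism complements can be sketched in a few lines as classic query likelihood with Dirichlet smoothing. This is a generic illustration, not Lavrenko's relevance model itself; the function name, the smoothing parameter `mu`, and the toy corpus are assumptions made for the example.

```python
import math
from collections import Counter

def query_likelihood(query, docs, mu=2000.0):
    """Rank documents by smoothed log P(query | document).

    Dirichlet smoothing backs off to the collection language model,
    so query terms absent from a document still receive nonzero mass.
    """
    collection = Counter()
    for doc in docs:
        collection.update(doc)
    c_len = sum(collection.values())

    scores = []
    for doc_id, doc in enumerate(docs):
        tf = Counter(doc)
        d_len = len(doc)
        log_p = 0.0
        for term in query:
            p_c = collection[term] / c_len          # collection model
            p = (tf[term] + mu * p_c) / (d_len + mu)  # smoothed doc model
            log_p += math.log(p) if p > 0 else float("-inf")
        scores.append((doc_id, log_p))
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

    A document containing all query terms ends up ranked above one containing only some of them, which is the ranking behaviour the abstract's "classical probabilistic" and "language modeling" approaches both aim for.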
  17. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.00
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material.
Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
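    The kind of statistics-driven search tool the abstract describes can be illustrated with a minimal inverted index scored by TF-IDF. The function names and the toy corpus below are invented for this sketch and do not correspond to any specific code shipped with the book.

```python
import math
from collections import Counter, defaultdict

def build_index(docs):
    """Build an inverted index mapping each term to {doc_id: term_frequency}."""
    index = defaultdict(dict)
    for doc_id, text in enumerate(docs):
        for term, tf in Counter(text.lower().split()).items():
            index[term][doc_id] = tf
    return index

def search(index, n_docs, query):
    """Score matching documents by summed TF-IDF over the query terms."""
    scores = defaultdict(float)
    for term in query.lower().split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n_docs / len(postings))  # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return sorted(scores.items(), key=lambda s: s[1], reverse=True)
```

    The inverse document frequency weight is exactly the kind of statistical property of large text collections the book emphasizes: it only becomes meaningful once many documents are indexed together.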
  18. Hare, C.E.; McLeod, J.: How to manage records in the e-environment : 2nd ed. (2006) 0.00
    Abstract
    A practical approach to developing and operating an effective programme to manage hybrid records within an organization. This title positions records management as an integral business function linked to the organisation's business aims and objectives. The authors also address the records requirements of new and significant pieces of legislation, such as data protection and freedom of information, as well as exploring strategies for managing electronic records. Bullet points, checklists and examples assist the reader throughout, making this a one-stop resource for information in this area.
    Footnote
1st ed. published under the title: Developing a records management programme
  19. Innovations in information retrieval : perspectives for theory and practice (2011) 0.00
    Abstract
    The advent of new information retrieval (IR) technologies and approaches to storage and retrieval provide communities with previously unheard-of opportunities for mass documentation, digitization, and the recording of information in all its forms. This book introduces and contextualizes these developments and looks at supporting research in IR, the debates, theories and issues. Contributed by an international team of experts, each authored chapter provides a snapshot of changes in the field, as well as the importance of developing innovation, creativity and thinking in IR practice and research. Key discussion areas include: browsing in new information environments; classification revisited: a web of knowledge; approaches to fiction retrieval research; music information retrieval research; folksonomies, social tagging and information retrieval; digital information interaction as semantic navigation; and assessing web search engines: a webometric approach. The questions raised are of significance to the whole international library and information science community, and this is essential reading for LIS professionals, researchers and students, and for all those interested in the future of IR.
    Content
    Contents: Bawden, D.: Encountering on the road to serendip? Browsing in new information environments. - Slavic, A.: Classification revisited: a web of knowledge. - Vernitski, A. and P. Rafferty: Approaches to fiction retrieval research, from theory to practice? - Inskip, C.: Music information retrieval research. - Peters, I.: Folksonomies, social tagging and information retrieval. - Kopak, R., L. Freund and H. O'Brien: Digital information interaction as semantic navigation. - Thelwall, M.: Assessing web search engines: a webometric approach
    Editor
    Foster, A.
  20. Floridi, L.: Philosophy and computing : an introduction (1999) 0.00
    Abstract
    Philosophy and Computing explores each of the following areas of technology: the digital revolution; the computer; the Internet and the Web; CD-ROMs and multimedia; databases, textbases, and hypertexts; Artificial Intelligence; the future of computing. Luciano Floridi shows us how the relationship between philosophy and computing provokes a wide range of philosophical questions: is there a philosophy of information? What can be achieved by a classic computer? How can we define complexity? What are the limits of quantum computers? Is the Internet an intellectual space or a polluted environment? What is the paradox in the Strong Artificial Intelligence program? Philosophy and Computing is essential reading for anyone wishing to fully understand both the development and history of information and communication technology as well as the philosophical issues it ultimately raises.

Languages

  • e 37
  • d 17

Types

  • m 54
  • s 20
  • d 1
  • el 1
  • i 1