Search (71 results, page 1 of 4)

  • Filter: classification_ss:"06.74 / Informationssysteme"
  1. Research and advanced technology for digital libraries : 7th European conference, ECDL 2003, Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.03
    0.025625031 = product of:
      0.038437545 = sum of:
        0.00990422 = weight(_text_:h in 2426) [ClassicSimilarity], result of:
          0.00990422 = score(doc=2426,freq=2.0), product of:
            0.09020387 = queryWeight, product of:
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03630739 = queryNorm
            0.10979818 = fieldWeight in 2426, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03125 = fieldNorm(doc=2426)
        0.01720423 = weight(_text_:u in 2426) [ClassicSimilarity], result of:
          0.01720423 = score(doc=2426,freq=2.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.14471136 = fieldWeight in 2426, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=2426)
        0.0047702272 = weight(_text_:a in 2426) [ClassicSimilarity], result of:
          0.0047702272 = score(doc=2426,freq=10.0), product of:
            0.041864127 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03630739 = queryNorm
            0.11394546 = fieldWeight in 2426, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2426)
        0.006558867 = product of:
          0.019676602 = sum of:
            0.019676602 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
              0.019676602 = score(doc=2426,freq=2.0), product of:
                0.1271423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03630739 = queryNorm
                0.15476047 = fieldWeight in 2426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2426)
          0.33333334 = coord(1/3)
      0.6666667 = coord(4/6)
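    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) debug output. As a minimal sketch (not the search engine's actual code), the per-term weights can be recomputed from the classic formulas; maxDocs and queryNorm are taken directly from the output, since queryNorm depends on every clause of the full query and cannot be derived from this excerpt alone:

    ```python
    from math import log, sqrt

    # Values read off the explain output itself:
    MAX_DOCS = 44218
    QUERY_NORM = 0.03630739

    def idf(doc_freq: int) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + log(MAX_DOCS / (doc_freq + 1))

    def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
        # score(t, d) = queryWeight * fieldWeight
        #             = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
        query_weight = idf(doc_freq) * QUERY_NORM
        field_weight = sqrt(freq) * idf(doc_freq) * field_norm
        return query_weight * field_weight

    # The three plain term clauses of doc 2426 (freq, docFreq), fieldNorm = 0.03125:
    clauses = [(2.0, 10020),   # _text_:h  -> ~0.00990422
               (2.0, 4547),    # _text_:u  -> ~0.01720423
               (10.0, 37942)]  # _text_:a  -> ~0.0047702272
    total = sum(term_score(f, df, 0.03125) for f, df in clauses)
    # _text_:22 sits in a nested boolean clause with its own coord(1/3):
    total += term_score(2.0, 3622, 0.03125) * (1 / 3)
    # Top level: 4 of 6 query clauses matched -> coord(4/6):
    total *= 4 / 6
    print(total)  # ~0.025625031, the document score shown above
    ```

    Small deviations in the last digits are expected: Lucene reports single-precision floats, while the sketch computes in double precision.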
    
    Content
    Contents:
    Uses, Users, and User Interaction
    Metadata Applications
    - Semantic Browsing / Alexander Faaborg, Carl Lagoze
    Annotation and Recommendation
    Automatic Classification and Indexing
    - Cross-Lingual Text Categorization / Nuria Bel, Cornelis H.A. Koster, Marta Villegas
    - Automatic Multi-label Subject Indexing in a Multilingual Environment / Boris Lauser, Andreas Hotho
    Web Technologies
    Topical Crawling, Subject Gateways
    - VASCODA: A German Scientific Portal for Cross-Searching Distributed Digital Resource Collections / Heike Neuroth, Tamara Pianos
    Architectures and Systems
    Knowledge Organization: Concepts
    - The ADEPT Concept-Based Digital Learning Environment / T.R. Smith, D. Ancona, O. Buchel, M. Freeston, W. Heller, R. Nottrott, T. Tierney, A. Ushakov
    - A User Evaluation of Hierarchical Phrase Browsing / Katrina D. Edgar, David M. Nichols, Gordon W. Paynter, Kirsten Thomson, Ian H. Witten
    - Visual Semantic Modeling of Digital Libraries / Qinwei Zhu, Marcos André Gonçalves, Rao Shen, Lillian Cassel, Edward A. Fox
    Collection Building and Management
    Knowledge Organization: Authorities and Works
    - Automatic Conversion from MARC to FRBR / Christian Mönch, Trond Aalberg
    Information Retrieval in Different Application Areas
    Digital Preservation
    Indexing and Searching of Special Document and Collection Information
    Editor
    Koch, T. and I. Torvik Solvberg
  2. Innovations in information retrieval : perspectives for theory and practice (2011) 0.02
    0.022185527 = product of:
      0.044371054 = sum of:
        0.014006683 = weight(_text_:h in 1757) [ClassicSimilarity], result of:
          0.014006683 = score(doc=1757,freq=4.0), product of:
            0.09020387 = queryWeight, product of:
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03630739 = queryNorm
            0.15527807 = fieldWeight in 1757, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03125 = fieldNorm(doc=1757)
        0.024330458 = weight(_text_:u in 1757) [ClassicSimilarity], result of:
          0.024330458 = score(doc=1757,freq=4.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.20465277 = fieldWeight in 1757, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=1757)
        0.006033913 = weight(_text_:a in 1757) [ClassicSimilarity], result of:
          0.006033913 = score(doc=1757,freq=16.0), product of:
            0.041864127 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03630739 = queryNorm
            0.14413087 = fieldWeight in 1757, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1757)
      0.5 = coord(3/6)
    
    Abstract
    The advent of new information retrieval (IR) technologies and approaches to storage and retrieval provides communities with previously unheard-of opportunities for mass documentation, digitization, and the recording of information in all its forms. This book introduces and contextualizes these developments and looks at supporting research in IR, the debates, theories and issues. Contributed by an international team of experts, each authored chapter provides a snapshot of changes in the field, as well as the importance of developing innovation, creativity and thinking in IR practice and research. Key discussion areas include: browsing in new information environments; classification revisited: a web of knowledge; approaches to fiction retrieval research; music information retrieval research; folksonomies, social tagging and information retrieval; digital information interaction as semantic navigation; assessing web search engines: a webometric approach. The questions raised are of significance to the whole international library and information science community, and this is essential reading for LIS professionals, researchers and students, and for all those interested in the future of IR.
    Content
    Contents: Bawden, D.: Encountering on the road to serendip? Browsing in new information environments. - Slavic, A.: Classification revisited: a web of knowledge. - Vernitski, A. and P. Rafferty: Approaches to fiction retrieval research, from theory to practice? - Inskip, C.: Music information retrieval research. - Peters, I.: Folksonomies, social tagging and information retrieval. - Kopak, R., L. Freund and H. O'Brien: Digital information interaction as semantic navigation. - Thelwall, M.: Assessing web search engines: a webometric approach
    Editor
    Foster, A.
    Footnote
    Review in: Mitt. VÖB 64(2011) H.3/4, pp.547-553 (O. Oberhauser): "This relatively slim volume of 156 pages (including index) contains seven peer-reviewed contributions by renowned authors on 'research fronts' in the field of information retrieval (IR) - a term understood quite broadly here. As the editors Allen Foster and Pauline Rafferty - both from the Department of Information Studies at Aberystwyth University (Wales) - stress in their introduction, in the Internet age the theory and practice of knowledge organization are no longer the sole domain of information scientists and library professionals, but also of computer scientists, Semantic Web developers and knowledge managers from a wide range of institutions; alongside the scholarly interest in the subject area, a commercial one has now emerged. The key issues today are, in particular, the processing of mass data, the handling of complex media, and the exploration of ways to involve the recipients. ..." Further reviews in: Library review 61(2012) no.3, pp.233-235 (G. Macgregor); J. Doc. 69(2013) no.2, pp.320-321 (J. Bates)
  3. Chu, H.: Information representation and retrieval in the digital age (2010) 0.02
    0.015813265 = product of:
      0.04743979 = sum of:
        0.017332384 = weight(_text_:h in 377) [ClassicSimilarity], result of:
          0.017332384 = score(doc=377,freq=2.0), product of:
            0.09020387 = queryWeight, product of:
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03630739 = queryNorm
            0.19214681 = fieldWeight in 377, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.0546875 = fieldNorm(doc=377)
        0.030107405 = weight(_text_:u in 377) [ClassicSimilarity], result of:
          0.030107405 = score(doc=377,freq=2.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.25324488 = fieldWeight in 377, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=377)
      0.33333334 = coord(2/6)
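    The explain tree for result 3 (doc 377) can be recomputed from the classic TF-IDF formulas it displays: only _text_:h and _text_:u match (two of six query clauses, hence coord(2/6) = 1/3), and the larger fieldNorm of 0.0546875 - ClassicSimilarity's length normalization, roughly 1/sqrt(field length) quantized to one byte - is what lifts the per-term weights relative to doc 2426. A minimal self-contained sketch, again taking maxDocs and queryNorm from the output:

    ```python
    from math import log, sqrt

    MAX_DOCS = 44218          # from the explain output
    QUERY_NORM = 0.03630739   # from the explain output

    def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
        # ClassicSimilarity: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
        idf = 1.0 + log(MAX_DOCS / (doc_freq + 1))
        return (idf * QUERY_NORM) * (sqrt(freq) * idf * field_norm)

    FIELD_NORM = 0.0546875    # doc 377 (doc 2426 had 0.03125)
    total = (term_score(2.0, 10020, FIELD_NORM)    # _text_:h
             + term_score(2.0, 4547, FIELD_NORM))  # _text_:u
    total *= 2 / 6                                 # coord(2/6)
    print(total)  # ~0.015813265, the document score shown above

    # With tf and idf fixed, the score scales linearly with fieldNorm:
    # 0.00990422 * (0.0546875 / 0.03125) ~ 0.017332384, the _text_:h weight here.
    ```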
    
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  4. nestor-Handbuch : eine kleine Enzyklopädie der digitalen Langzeitarchivierung; [im Rahmen des Projektes: Nestor - Kompetenznetzwerk Langzeitarchivierung und Langzeitverfügbarkeit digitaler Ressourcen für Deutschland] / Georg-August-Universität Göttingen. (2009) 0.01
    0.01462088 = product of:
      0.02924176 = sum of:
        0.00990422 = weight(_text_:h in 3715) [ClassicSimilarity], result of:
          0.00990422 = score(doc=3715,freq=2.0), product of:
            0.09020387 = queryWeight, product of:
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03630739 = queryNorm
            0.10979818 = fieldWeight in 3715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03125 = fieldNorm(doc=3715)
        0.01720423 = weight(_text_:u in 3715) [ClassicSimilarity], result of:
          0.01720423 = score(doc=3715,freq=2.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.14471136 = fieldWeight in 3715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=3715)
        0.0021333103 = weight(_text_:a in 3715) [ClassicSimilarity], result of:
          0.0021333103 = score(doc=3715,freq=2.0), product of:
            0.041864127 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03630739 = queryNorm
            0.050957955 = fieldWeight in 3715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3715)
      0.5 = coord(3/6)
    
    Editor
    Neuroth, H., A. Oßwald, R. Scheffel, S. Strathmann and K. Huth
  5. Web-2.0-Dienste als Ergänzung zu algorithmischen Suchmaschinen (2008) 0.01
    0.013554226 = product of:
      0.040662676 = sum of:
        0.014856329 = weight(_text_:h in 4323) [ClassicSimilarity], result of:
          0.014856329 = score(doc=4323,freq=2.0), product of:
            0.09020387 = queryWeight, product of:
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03630739 = queryNorm
            0.16469726 = fieldWeight in 4323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.046875 = fieldNorm(doc=4323)
        0.025806347 = weight(_text_:u in 4323) [ClassicSimilarity], result of:
          0.025806347 = score(doc=4323,freq=2.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.21706703 = fieldWeight in 4323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=4323)
      0.33333334 = coord(2/6)
    
    Content
    The book project originated within the Theseus research project (sub-project Alexandria). - Available as an online publication at: http://www.bui.haw-hamburg.de/fileadmin/user_upload/lewandowski/Web20-Buch/lewandowski-maass.pdf. Review in: ZfBB 56(2009) H.2, pp.134-135 (K. Lepsky)
    Editor
    Lewandowski, D. and C. Maaß
  6. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.012520644 = product of:
      0.03756193 = sum of:
        0.0046187527 = weight(_text_:a in 1397) [ClassicSimilarity], result of:
          0.0046187527 = score(doc=1397,freq=6.0), product of:
            0.041864127 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03630739 = queryNorm
            0.11032722 = fieldWeight in 1397, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.032943178 = product of:
          0.049414765 = sum of:
            0.024819015 = weight(_text_:29 in 1397) [ClassicSimilarity], result of:
              0.024819015 = score(doc=1397,freq=2.0), product of:
                0.12771805 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03630739 = queryNorm
                0.19432661 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
            0.02459575 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.02459575 = score(doc=1397,freq=2.0), product of:
                0.1271423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03630739 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Classification
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
    Date
    22. 3.2008 14:29:25
    RVK
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
  7. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.012075194 = product of:
      0.024150388 = sum of:
        0.015206535 = weight(_text_:u in 636) [ClassicSimilarity], result of:
          0.015206535 = score(doc=636,freq=4.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.12790798 = fieldWeight in 636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.00480735 = weight(_text_:a in 636) [ClassicSimilarity], result of:
          0.00480735 = score(doc=636,freq=26.0), product of:
            0.041864127 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03630739 = queryNorm
            0.11483221 = fieldWeight in 636, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.0041365027 = product of:
          0.012409507 = sum of:
            0.012409507 = weight(_text_:29 in 636) [ClassicSimilarity], result of:
              0.012409507 = score(doc=636,freq=2.0), product of:
                0.12771805 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03630739 = queryNorm
                0.097163305 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Contains the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Date
    29. 3.1996 18:16:49
    Editor
    Voorhees, E.M. and D.K. Harman
    Footnote
    Review in: JASIST 58(2007) no.6, pp.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
  8. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    0.011881549 = product of:
      0.035644647 = sum of:
        0.025806347 = weight(_text_:u in 1781) [ClassicSimilarity], result of:
          0.025806347 = score(doc=1781,freq=2.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.21706703 = fieldWeight in 1781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=1781)
        0.0098383 = product of:
          0.0295149 = sum of:
            0.0295149 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.0295149 = score(doc=1781,freq=2.0), product of:
                0.1271423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03630739 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    22. 3.2008 14:35:21
    Issue
    2nd, revised and expanded edition
  9. Guba, B.: Unbekannte Portalwelten? : der Wegweiser! (2003) 0.01
    0.011853025 = product of:
      0.02370605 = sum of:
        0.006190138 = weight(_text_:h in 1937) [ClassicSimilarity], result of:
          0.006190138 = score(doc=1937,freq=2.0), product of:
            0.09020387 = queryWeight, product of:
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.03630739 = queryNorm
            0.06862386 = fieldWeight in 1937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4844491 = idf(docFreq=10020, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1937)
        0.015206535 = weight(_text_:u in 1937) [ClassicSimilarity], result of:
          0.015206535 = score(doc=1937,freq=4.0), product of:
            0.11888653 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03630739 = queryNorm
            0.12790798 = fieldWeight in 1937, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1937)
        0.0023093764 = weight(_text_:a in 1937) [ClassicSimilarity], result of:
          0.0023093764 = score(doc=1937,freq=6.0), product of:
            0.041864127 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03630739 = queryNorm
            0.05516361 = fieldWeight in 1937, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1937)
      0.5 = coord(3/6)
    
    Classification
    QR 760 Wirtschaftswissenschaften / Gewerbepolitik. Einzelne Wirtschaftszweige / Industrie, Bergbau, Handel, Dienstleistungen, Handwerk / Öffentliche Versorgungseinrichtungen. Elektrizität. Gas. Wasser / Informationsgewerbe (Massenmedien). Post / Neue Medien. Online-Dienste (Internet u. a.)
    Footnote
    Review in: Mitt. VÖB 61(2008) H.2, pp.70-72 (M. Katzmayr): "In this book Beate Guba deals with web portals, with particular attention to university portals. Information management at universities indeed cannot complain about a lack of challenges: as the author explains in her introduction, universities have to manage both administrative and scientific information in very large quantities. The data sources needed for specific information, in the administrative as well as the scientific domain, are often isolated from one another and reside in heterogeneous systems - unwanted redundancies and inconsistencies are the consequence. Combined with the increased demand for electronically available specialist information in academia, a planned, structured and efficient handling of information becomes necessary. A portal can render valuable services here - but what exactly is to be understood by that? After engaging with the relevant literature, Guba arrives at the following working definition: "A portal is [...] a virtual place for the provision and distribution of data and information (distributed across different applications) and enables end-to-end IT-supported process control across technical system boundaries [...] A university information and communication portal thus has the function of supporting, on the one hand, the operation of the university and, on the other, the fundamental components of the scholarly process, namely the acquisition, storage, publication and transfer of information and knowledge, as well as scholarly communication". A further essential feature of portals is the possibility of personalizing their functions.
    Location
    A
    RVK
    QR 760 Wirtschaftswissenschaften / Gewerbepolitik. Einzelne Wirtschaftszweige / Industrie, Bergbau, Handel, Dienstleistungen, Handwerk / Öffentliche Versorgungseinrichtungen. Elektrizität. Gas. Wasser / Informationsgewerbe (Massenmedien). Post / Neue Medien. Online-Dienste (Internet u. a.)
  10. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.01
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Editor
    Stamou, G. u. S. Kollias
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatic extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web.
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  11. Weilenmann, A.-K.: Fachspezifische Internetrecherche : für Bibliothekare, Informationsspezialisten und Wissenschaftler (2001) 0.01
    Footnote
    Rez. in: Online-Mitteilungen 2006, H.88 [=Mitteilungen VÖB 2006, H.4], S.16-18 (M. Buzinkay): "Word has got around that the internet can be a haystack in which the proverbial needle cannot even be approximately found. Accordingly, there are plenty of orientation aids and guides, both online and in more traditional media such as books. The present work by Anna-Katharina Weilenmann also belongs to this category of internet guides. In roughly 200 pages it offers entry points into a wide variety of scholarly topics. Academic disciplines are opened up via so-called subject gateways - let us simply call them subject portals - mostly through a short but precise description of the online resource. Each subject area is additionally supplemented with lexica, encyclopedias, bibliographies and databases. The arrangement of the subject areas follows the Dewey Decimal Classification. The range of subjects is correspondingly broad and oriented toward scholarship: - library and information science - philosophy and psychology, religion/theology - social sciences, sociology, politics - economics, law - education, ethnology, language, literary studies - mathematics, physics, chemistry, biology - technology - medicine - agriculture, computer science - art, architecture, music, theatre, film - sport - history, geography, travel. In selecting the individual web sources the author was guided by quality criteria such as the age of the website, the reliability of its contents and its currency, but also by the type of institution behind it. Websites with an academic background therefore stood in the foreground, but were not represented exclusively. Thus the directory also includes websites of commercial providers (e.g. Scopus from Elsevier) as well as of other public, non-academic institutions (such as the Austrian National Library with Ariadne).
Around 200 German- and English-language entries are described in more detail in the book, with information on the content of the offering, its authorship, and any costs. Further links are also frequently given. An introductory chapter on information searching rounds off this successful book.
    Weitere Rez: BuB 29(2007) H.1, S.71-72 (J. Plieninger)
    Weitere Rez: Information - Wissenschaft und Praxis 58(2007) H.5, S.317-318 (M. Katzmayr): "Conclusion: this volume offers an interesting and relevant compilation of important starting points for thematic web searches; given its practical relevance, it deserves wide circulation. Librarians in subject reference service in academic or larger public libraries in particular can draw great benefit from this well-sorted treasure trove of high-quality internet sources."
  12. New directions in cognitive information retrieval (2005) 0.01
    Editor
    Spink, A. u. C. Cole
    Footnote
    Rez. in: Mitt. VÖB 59(2006) H.3, S.95-98 (O. Oberhauser): "This collection by the editors A. Spink & C. Cole appeared shortly before their second book, which was reviewed in the last issue of the Mitteilungen der VÖB. It addresses information scientists, librarians, social scientists, and computer scientists with an interest in human-computer interaction, and offers an insight into current research on cognitively oriented information retrieval. This line of research, which starts from the analysis of users' information problems and their cognitive behaviour when using information systems, stands in a certain contrast to the traditionally dominant IR paradigm, which concentrates on optimizing IR systems and their efficiency. "Cognitive information retrieval", or CIR (of course it does not go without yet another acronym), is an interdisciplinary research area that includes activities from information science, computer science, the humanities, cognitive science, human-computer interaction, and other information-related fields.
    CIR Concepts - Interactive information retrieval: Bringing the user to a selection state, by Charles Cole et al. (Montréal), concentrates on the cognitive aspect of users interacting with, and reacting to, the stimuli emitted by the IR system; "selection" here refers to the choices the system demands of users, which contribute to changing their knowledge structures. - Cognitive overlaps along the polyrepresentation continuum, by Birger Larsen and Peter Ingwersen (Copenhagen), describes a methodological approach based on Ingwersen's Principle of Polyrepresentation that gives the IR system a broader picture of the user and the documents than is possible with conventional, purely query-based systems. - Integrating approaches to relevance, by Ian Ruthven (Glasgow), analyses the concept of relevance and proposes a multidimensional view in place of the one-dimensional relevance concept currently used in IR systems. - New cognitive directions, by Nigel Ford (Sheffield), introduces new terms: in place of information need and information behaviour, Ford proposes the alternatives knowledge need and knowledge behaviour.
    CIR Processes - A multitasking framework for cognitive information retrieval, by Amanda Spink and Charles Cole (Australia/Canada), regards the simultaneous handling of several tasks (topics) during an information search - in contrast to traditional approaches - as the normal case and analyses the associated user behaviour. - Explanation in information seeking and retrieval, by Pertti Vakkari and Kalervo Järvelin (Tampere), argues on the basis of two empirical studies for the use of the task-oriented approach in IR research, not least as a bridge between disciplines that do not communicate sufficiently with one another (information science, computer science, various social sciences). - Towards an alternative information retrieval system for children, by Jamshid Beheshti et al. (Montréal), reports on the state of IR research for children and proposes using a metaphor from social constructivism (learning as social negotiation) as a design principle for such IR systems. CIR Techniques - Implicit feedback: using behavior to infer relevance, by Diane Kelly (North Carolina), critically examines the techniques for analysing the relevance feedback - explicit and implicit - expressed by users of IR systems. - Educational knowledge domain visualizations, by Peter Hook and Katy Börner (Indiana), describes various visualization techniques for representing knowledge domains that are intended to support "novices" in using domain-specific IR systems. - Learning and training to search, by Wendy Lucas and Heikki Topi (Massachusetts), analyses, in the broader context of information-seeking research, techniques for training users of IR systems.
    Weitere Rez. in: JASIST 58(2007) no.5, S.758-760 (A. Gruzd): "Despite the minor drawbacks described, the book is a great source for researchers in the IR&S fields in general and in the CIR field in particular. Furthermore, different chapters of this book also might be of interest to members from other communities. For instance, librarians responsible for library instruction might find the chapter on search training by Lucas and Topi helpful in their work. Cognitive psychologists would probably be intrigued by Spink and Cole's view on multitasking. IR interface designers will likely find the chapter on KDV by Hook and Borner very beneficial. And students taking IR-related courses might find the thorough literature reviews by Ruthven and Kelly particularly useful when beginning their own research."
  13. Information und Wissen : global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011 (2010) 0.01
    Content
    - Infometrics & Representations Steffen Hennicke, Marlies Olensky, Viktor de Boer, Antoine Isaac, Jan Wielemaker: A data model for cross-domain data representation Stefanie Haustein: Wissenschaftliche Zeitschriften im Web 2.0 Philipp Leinenkugel, Werner Dees, Marc Rittberger: Abdeckung erziehungswissenschaftlicher Zeitschriften in Google Scholar - Information Retrieval Ari Pirkola: Constructing Topic-specific Search Keyphrase Suggestion Tools for Web Information Retrieval Philipp Mayr, Peter Mutschke, Vivien Petras, Philipp Schaer, York Sure: Applying Science Models for Search Daniela Becks, Thomas Mandl, Christa Womser-Hacker: Spezielle Anforderungen bei der Evaluierung von Patent-Retrieval-Systemen Andrea Ernst-Gerlach, Dennis Korbar, Ara Awakian: Entwicklung einer Benutzeroberfläche zur interaktiven Regelgenerierung für die Suche in historischen Dokumenten - Multimedia Peter Schultes, Franz Lehner, Harald Kosch: Effects of real, media and presentation time in annotated video Marc Ritter, Maximilian Eibl: Ein erweiterbares Tool zur Annotation von Videos Margret Plank: AV-Portal für wissenschaftliche Filme: Analyse der Nutzerbedarfe Achim Oßwald: Significant properties digitaler Objekte
    - Information Professionals & Usage Rahmatollah Fattahi, Mohaddeseh Dokhtesmati, Maryam Saberi: A survey of internet searching skills among intermediate school students: How librarians can help Matthias Görtz: Kontextspezifische Erhebung von aufgabenbezogenem Informationssuchverhalten Jürgen Reischer, Daniel Lottes, Florian Meier, Matthias Stirner: Evaluation von Summarizing-Systemen Robert Mayo Hayes, Karin Karlics, Christian Schlögl: Bedarf an Informationsspezialisten in wissensintensiven Branchen der österreichischen Volkswirtschaft - User Experience & Behavior Isto Huvila: Mining qualitative data on human information behaviour from the Web Rahel Birri Blezon, Rene Schneider: The Social Persona Approach Elena Shpilka, Ralph Koelle, Wolfgang Semar: "Mobile Tagging": Konzeption und Implementierung eines mobilen Informationssystems mit 2D-Tags Johannes Baeck, Sabine Wiem, Ralph Kölle, Thomas Mandl: User Interface Prototyping Nadine Mahrholz, Thomas Mandl, Joachim Griesbaum: Analyse und Evaluierung der Nutzung von Sitelinks Bernard Bekavac, Sonja Öttl, Thomas Weinhold: Online-Beratungskomponente für die Auswahl von Usability-Evaluationsmethoden
    - Information Domains & Concepts Michal Golinski: Use, but verify Mohammad Nazim, Bhaskar Mukherjee: Problems and prospects of implementing knowledge management in university libraries: A case study of Banaras Hindu University Library System Daniela Becks, Julia Maria Schulz: Domänenübergreifende Phrasenextraktion mithilfe einer lexikonunabhängigen Analysekomponente Wolfram Sperber, Bernd Wegner: Content Analysis in der Mathematik: Erschließung und Retrieval mathematischer Publikationen Jürgen Reischer: Das Konzept der Informativität - Information Society Joseph Adjei, Peter Tobbin: Identification Systems Adoption in Africa; The Case of Ghana Alexander Botte, Marc Rittberger, Christoph Schindler: Virtuelle Forschungsumgebungen Rainer Kuhlen: Der Streit um die Regelung des Zweitveröffentlichungsrechts im Urheberrecht - E-Learning / Social Media Marketing Tobias Fries, Sebastian Boosz, Andreas Henrich: Integrating industrial partners into e-teaching efforts Christopher Stehr, Melanie Hiller: E-Learningkurs Globalisierung Manuel Burghardt, Markus Heckner, Tim Schneidermeier, Christian Wolff: Social-Media-Marketing im Hochschulbereich
    - Posterpräsentationen Peter Böhm, Marc Rittberger: Nutzungsanalyse des Deutschen Bildungsservers und Konzeption eines Personalisierungsangebots Andreas Bohne-Lang, Elke Lang: A landmark in biomedical information: many ways are leading to PubMed Ina Blümel, Rene Berndt: 3D-Modelle in bibliothekarischen Angeboten Nicolai Erbs, Daniel Bär, Iryna Gurevych, Torsten Zesch: First Aid for Information Chaos in Wikis Maria Gäde, Juliane Stiller: Multilingual Interface Usage Jasmin Hügi, Rahel Birri Blezon, Rene Schneider: Fassettierte Suche in Benutzeroberflächen von digitalen Bibliotheken Hanna Knäusl: Ordnung im Weltwissen Isabel Nündel, Erich Weichselgartner, Günter Krampen: Die European Psychology Publication Platform Projektteam IUWIS: IUWIS (Infrastruktur Urheberrecht in Wissenschaft und Bildung): Urheberrecht zwischen Fakten und Diskursen Helge Klaus Rieder: Die Kulturgüterdatenbank der Region Trier Karl Voit, Keith Andrews, Wolfgang Wintersteller, Wolfgang Slany: TagTree: Exploring Tag-Based Navigational Structures Jakob Voß, Mathias Schindler, Christian Thiele: Link Server aggregation with BEACON
    Editor
    Griesbaum, J., T. Mandl u. C. Womser-Hacker
    Footnote
    Rez. in: Mitt. VÖB. 65(2012) H.1, S.109-113 (O. Oberhauser).
  14. Franke, F.; Klein, A.; Schüller-Zwierlein, A.: Schlüsselkompetenzen : Literatur recherchieren in Bibliotheken und Internet (2010) 0.01
    Date
    29. 8.2011 12:21:48
    Footnote
    Rez. in: ZfBB 58(2011) H.3/4, S.240-242 (S. Köppl)
  15. Chu, H.: Information representation and retrieval in the digital age (2010) 0.01
    Footnote
    Rez. in: JASIST 56(2005) no.2, S.215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip over themselves is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, an often painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
    Chu's intent with this book is clear throughout the entire text. With this presentation, she writes with the novice in mind or, as she puts it in the Preface, "to anyone who is interested in learning about the field, particularly those who are new to it." After reading the text, I found that this book is also an appropriate reference book for those who are somewhat advanced in the field. I found the chapters on information retrieval models and techniques, metadata, and AI very informative in that they contain information that is often rather densely presented in other texts. Although, I must say, the metadata section in Chapter 3 is pretty basic and contains more questions about the area than information. . . . It is an excellent book to have in the classroom, on your bookshelf, etc. It reads very well and is written with the reader in mind. If you are in need of a more advanced or technical text on the subject, this is not the book for you. But, if you are looking for a comprehensive manual that can be used as a "flip-through," then you are in luck."
    Weitere Rez. in: nfd 55(2004) H.4, S.252 (D. Lewandowski): "There is no shortage of books on information retrieval, and several titles are available in German as well. Nevertheless, a new (English-language) book on the subject is reviewed here. It stands out for its brevity (only about 230 pages of text) and its clarity, and is thus aimed primarily at students in their first semesters. Heting Chu has taught at the Palmer School of Library and Information Science of Long Island University, New York, since 1994. The book clearly shows the extensive experience the author has gathered in teaching this material in her information retrieval courses. It is written in clear, comprehensible language and introduces the foundations of knowledge representation and information retrieval. The textbook treats these subjects as a single complex and thereby goes beyond the scope of similar books, which as a rule confine themselves to retrieval. The book is divided into twelve chapters; the first gives an overview of the topics to be covered and offers the reader an accessible introduction to the basic concepts and the history of IRR. Besides a brief chronological account of the development of IRR systems, four pioneers of the field are honored: Mortimer Taube, Hans Peter Luhn, Calvin N. Mooers, and Gerard Salton. This lends a human dimension to material that students sometimes find dry. The second and third chapters are devoted to knowledge representation: first the fundamental approaches such as indexing, classification, and abstracting are discussed, followed by knowledge representation by means of metadata, with emphasis on newer approaches such as Dublin Core and RDF. Further subsections deal with the representation of full texts and of multimedia information.
The role of language in IRR is treated in a chapter of its own. Various forms of controlled vocabulary and the essential features that distinguish it from natural language are explained concisely, and the suitability of the two forms of representation for different IRR purposes is discussed from several angles."
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  16. The history and heritage of scientific and technological information systems : Proceedings of the 2002 Conference (2004)
    Content
    Includes, among others, the contributions: Fugmann, R.: Learning the lessons of the past; Davis, C.H.: Indexing and index editing at Chemical Abstracts before the Registry System; Roe, E.M.: Abstracts and indexes to branded full text: what's in a name?; Lynch, M.F.: Introduction of computers in chemical structure information systems, or what is not recorded in the annals; Baatz, S.: Medical science and medical informatics: The visible human project, 1986-2000.
    Editor
    Rayward, W.B. and M.E. Bowden
  17. Broughton, V.: Essential thesaurus construction (2006)
    Abstract
    Many information professionals working in small units today fail to find the published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for the building of a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: what is a thesaurus; tools for subject access and retrieval; what a thesaurus is used for; why use a thesaurus; examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
    Footnote
    Rez. in: Mitt. VÖB 60(2007) H.1, S.98-101 (O. Oberhauser): "The author of Essential thesaurus construction (and essential taxonomy construction, as the implicit subtitle has it, cf. p. 1) is amply qualified in this field through her teaching at the well-known School of Library, Archive and Information Studies of University College London and through her previous publications on (faceted) classification and thesauri. Following Essential classification, her thesaurus textbook is now available: with around 200 pages of text and just under 100 pages of appendices it is a handy volume that, as the short introductory chapter notes, owes its genesis largely to her teaching. The book follows the school of Jean Aitchison et al. and addresses "the indexer" in the broadest sense, i.e. anyone who wants or needs to build a structured, controlled subject vocabulary for indexing and searching. It seeks to equip this audience with the necessary methodological tools, which it does in twenty chapters including the introduction and the concluding remarks, an appealing structure that allows the material to be worked through in well-measured doses. The exercises the author poses throughout (with solutions at the end of each chapter) contribute to this as well. At the outset, the "information retrieval thesaurus" is distinguished from the "reference thesaurus" that (at least in the English-speaking world) is far more commonly associated with the term: a dictionary of synonyms arranged by conceptual similarity, popular as an aid to style when writing (scholarly) texts.
Without yet going into detail, the book introduces the visual appearance and the fields of application of thesauri, explains the thesaurus as a post-coordinate indexing language, and notes its closeness to faceted classification systems. Broughton then contrasts systematically organized systems (classification/taxonomy, concept and topic maps, ontologies) with alphabetically arranged, word-based ones (subject heading lists, thesaurus-like subject heading systems, and thesauri proper), which gives the reader further help with orientation. The uses of thesauri for indexing (including as a source of metadata for electronic and Web documents) and for searching (query formulation, query expansion, browsing and navigation) are covered, as are the problems that arise when natural-language indexing systems are used. Examples explicitly point out the more or less pronounced subject specialization of most of these vocabularies, and information sources about thesauri (e.g. www.taxonomywarehouse.com) as well as thesauri for non-textual resources are touched on briefly.
    In the more detailed chapters, Broughton first points out the importance of the systematic part of a thesaurus alongside the alphabetic part and then explains the elements of the latter; besides the usual thesaurus relationships, the option of equipping the entries with notations from a classification system is mentioned. The thesaurus relationships themselves are discussed in more depth in a later chapter, which also takes up the polyhierarchical relationship. Two chapters on vocabulary control introduce aspects such as the treatment of synonyms, the avoidance of ambiguity, the choice of preferred terms, and the forms of thesaurus entries (grammatical form, spelling, character set, singular/plural, compounds and their decomposition, etc.). A total of eight chapters, skillfully interleaved with the sections mentioned so far, come under the motto "Building a thesaurus". In brief, these cover the following activities and processes: collecting the vocabulary from appropriate sources; extracting terms from document titles and the problems this entails; analyzing the vocabulary (the facet method); building an internal structure (facets and sub-facets, arrangement of terms); creating and representing a hierarchical structure; compound subjects and concepts (facet arrangement: filing order vs. citation order); converting the taxonomic arrangement into an alphabetic format (selecting preferred terms, identifying hierarchical relationships, related terms, etc.); and generating the final thesaurus entries.
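    The alphabetic entry format with its standard relationships (BT, NT, RT, UF) discussed in this entry can be sketched as a small data model. The following Python fragment is purely illustrative; the terms, class names, and field labels are invented for the example and are not taken from Broughton's book:

```python
# Minimal sketch of alphabetic thesaurus entries with the standard
# relationship types: BT (broader term), NT (narrower term),
# RT (related term), UF (used for, i.e. non-preferred synonyms).
from dataclasses import dataclass, field

@dataclass
class ThesaurusEntry:
    term: str                                        # preferred term
    broader: list = field(default_factory=list)      # BT
    narrower: list = field(default_factory=list)     # NT
    related: list = field(default_factory=list)      # RT
    used_for: list = field(default_factory=list)     # UF

def alphabetic_display(entries):
    """Render the entries as a conventional alphabetic listing."""
    lines = []
    for e in sorted(entries, key=lambda e: e.term.lower()):
        lines.append(e.term)
        lines += [f"  UF {t}" for t in sorted(e.used_for)]
        lines += [f"  BT {t}" for t in sorted(e.broader)]
        lines += [f"  NT {t}" for t in sorted(e.narrower)]
        lines += [f"  RT {t}" for t in sorted(e.related)]
    return "\n".join(lines)

entries = [
    ThesaurusEntry("vehicles", narrower=["cars"]),
    ThesaurusEntry("cars", broader=["vehicles"], used_for=["automobiles"]),
]
print(alphabetic_display(entries))
```

A real thesaurus would of course also validate reciprocity (every BT implies a matching NT) and handle the polyhierarchical case mentioned above, where a term has more than one broader term.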
    Weitere Rez. in: New Library World 108(2007) nos.3/4, S.190-191 (K.V. Trickey): "Vanda has provided a very useful work that will enable any reader who is prepared to follow her instruction to produce a thesaurus that will be a quality language-based subject access tool that will make the task of information retrieval easier and more effective. Once again I express my gratitude to Vanda for producing another excellent book." - Electronic Library 24(2006) no.6, S.866-867 (A.G. Smith): "Essential thesaurus construction is an ideal instructional text, with clear bullet point summaries at the ends of sections, and relevant and up to date references, putting thesauri in context with the general theory of information retrieval. But it will also be a valuable reference for any information professional developing or using a controlled vocabulary." - KO 33(2006) no.4, S.215-216 (M.P. Satija)
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  18. Information visualization in data mining and knowledge discovery (2002)
    Date
    23. 3.2008 19:10:22
    Editor
    Fayyad, U. et al.
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8).
    It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided.
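    As a purely illustrative aside, one of the basic data mining tasks named above, clustering, can be sketched with a naive k-means implementation in a few lines of Python. The sample points and the deterministic first-k initialization are invented for the example and are not taken from the book:

```python
# Naive k-means: alternate an assignment step (each point joins its
# nearest center) and an update step (each center moves to the mean
# of its cluster) until the partition stabilizes.
def kmeans(points, k, iters=20):
    centers = list(points[:k])  # naive deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the nearest center by squared Euclidean distance
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:  # move center to the mean of its assigned points
                centers[i] = tuple(sum(col) / len(c) for col in zip(*c))
    return centers, clusters

# two well-separated groups of 2-D points
points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, clusters = kmeans(points, k=2)
```

Production code would use a library implementation with better initialization (e.g. k-means++) and a convergence test instead of a fixed iteration count.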
    This text concludes with a thorough and useful 17-page index and lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the crossdisciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  19. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001)
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even handling of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material.
Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
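    The statistical properties of text that the book emphasizes can be illustrated with a minimal TF-IDF weighting sketch. The toy corpus and the whitespace tokenization below are invented for the example and are not from the book's accompanying materials:

```python
# Minimal TF-IDF: a term weighs more in a document when it is frequent
# there (tf) but rare across the collection (idf).
import math
from collections import Counter

docs = [
    "finding out about search engines",
    "search engines index text documents",
    "cognitive models of information seeking",
]
tokenized = [d.split() for d in docs]

# document frequency: in how many documents each term appears
df = Counter(t for doc in tokenized for t in set(doc))
N = len(docs)

def tfidf(doc_tokens):
    """Return term -> tf-idf weight for one tokenized document."""
    tf = Counter(doc_tokens)
    return {t: (tf[t] / len(doc_tokens)) * math.log(N / df[t]) for t in tf}

weights = tfidf(tokenized[0])
# "search" appears in two documents, so it is weighted lower than
# "finding", which appears in only one.
```

Real search engines add smoothing, length normalization, and far more careful tokenization, but the core intuition is the same.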
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  20. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2006)
    Footnote
    Rez. in: Online-Mitteilungen 2006, H.88, S.13-15 [=Mitteilungen VOEB 59(2006) H.4] (M. Katzmayr): "This textbook, now in its fifth, completely revised edition, aims to provide a practice-oriented introduction to information retrieval (IR). Together with the subject-specific volumes by the same author, "Wirtschaftsinformation: Online, CD-ROM, Internet" and "Naturwissenschaftlich-technische Information: Online, CD-ROM, Internet", it forms a three-part complete edition on IR. The introductory volume reviewed here is divided into foundations, methods, and subject-specific aspects (the last of these is treated in depth in the supplementary volumes just mentioned). That this is a textbook is made clear not least by the review questions at the end of each chapter, the search exercises, and a number of homework assignments. The focus is on licensed online databases; Web information retrieval is not covered. The first chapter, "Grundlagen des Information Retrieval", conveys basic knowledge about search databases and their use: how databases can be categorized and uniformly described, how records are typically structured depending on the information they store, which steps a search typically involves, and how the costs of an online search can be categorized. Finally, a brief market overview of major commercial database vendors is given. The following chapter, "Methoden des Information Retrieval", explains command-based retrieval using the query language DataStarOnline (DSO), which is used at the host Dialog DataStar. In addition to basic functions such as selecting and switching databases, the use of search and proximity operators, truncation, limiting, and commands for displaying and outputting search results, as well as selected special functions, are presented in detail.
This is followed by a guide, documented with screenshots, to using the host's Web search interfaces."
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur

Languages

  • e 44
  • d 27

Types

  • m 70
  • s 24
  • i 2
  • d 1
  • el 1
  • r 1