Search (10 results, page 1 of 1)

  • classification_ss:"TVV (DU)"
  1. Schweibenz, W.; Thissen, F.: Qualität im Web : Benutzerfreundliche Webseiten durch Usability Evaluation (2003) 0.05
    0.051901307 = product of:
      0.18165457 = sum of:
        0.055663757 = weight(_text_:wide in 767) [ClassicSimilarity], result of:
          0.055663757 = score(doc=767,freq=6.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.42394912 = fieldWeight in 767, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=767)
        0.06752606 = weight(_text_:web in 767) [ClassicSimilarity], result of:
          0.06752606 = score(doc=767,freq=30.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.69824153 = fieldWeight in 767, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=767)
        0.05177324 = weight(_text_:elektronische in 767) [ClassicSimilarity], result of:
          0.05177324 = score(doc=767,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.3694514 = fieldWeight in 767, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0390625 = fieldNorm(doc=767)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
              0.020074548 = score(doc=767,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=767)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
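The indented breakdowns under each hit are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking. As a minimal Python sketch of how one term weight above is composed, using Lucene's standard definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one weight(_text_:term) node of a ClassicSimilarity explain tree."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm              # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight                # score = queryWeight * fieldWeight

# weight(_text_:wide in 767): freq=6.0, idf(docFreq=1430, maxDocs=44218)
w = classic_term_score(6.0, 1430, 44218, query_norm=0.029633347, field_norm=0.0390625)
print(round(w, 9))  # ~0.055663757
```

The hit's overall score is then the sum of its matched term weights times the coordination factor, here 0.18165457 × 4/14 ≈ 0.0519.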
    
    Abstract
    For web sites, as for all interactive applications from the simplest machine to complex software, usability is of central importance. Yet the effective use of information offerings on the World Wide Web is often needlessly hampered by "cool design" that neglects central aspects of user-friendliness (usability). Usability evaluation can improve the usability of web sites and, with it, their acceptance among users. The goal is the design of appealing, user-friendly web offerings that allow users an effective and efficient dialogue. The book offers a practice-oriented introduction to web usability evaluation and describes how its various methods are applied.
    BK
    05.38 / Neue elektronische Medien <Kommunikationswissenschaft>
    Classification
    ST 252 Informatik / Monographien / Software und -entwicklung / Web-Programmierung, allgemein
    05.38 / Neue elektronische Medien <Kommunikationswissenschaft>
    Content
    Introduction.- Foundations of web design.- Usability and usability engineering.- Usability engineering and the Web.- Methodological questions of usability evaluation.- Expert-oriented methods.- User-oriented methods.- Search-engine-oriented methods.- Bibliography.- Glossary.- Index.- Checklists.
    Date
    22.03.2008 14:24:08
    RSWK
    Web-Seite / Gestaltung / Benutzerorientierung / Benutzerfreundlichkeit
    World Wide Web / Web Site / Gebrauchswert / Kundenorientierung / Kommunikationsdesign (GBV)
    Web-Seite / Qualität (BVB)
    RVK
    ST 252 Informatik / Monographien / Software und -entwicklung / Web-Programmierung, allgemein
    Subject
    Web-Seite / Gestaltung / Benutzerorientierung / Benutzerfreundlichkeit
    World Wide Web / Web Site / Gebrauchswert / Kundenorientierung / Kommunikationsdesign (GBV)
    Web-Seite / Qualität (BVB)
  2. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.04
    0.04208579 = product of:
      0.14730026 = sum of:
        0.059519455 = weight(_text_:wide in 1981) [ClassicSimilarity], result of:
          0.059519455 = score(doc=1981,freq=14.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.45331508 = fieldWeight in 1981, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.06795238 = weight(_text_:web in 1981) [ClassicSimilarity], result of:
          0.06795238 = score(doc=1981,freq=62.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.70264983 = fieldWeight in 1981, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.009343155 = weight(_text_:information in 1981) [ClassicSimilarity], result of:
          0.009343155 = score(doc=1981,freq=14.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1796046 = fieldWeight in 1981, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.010485282 = weight(_text_:retrieval in 1981) [ClassicSimilarity], result of:
          0.010485282 = score(doc=1981,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.11697317 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.2857143 = coord(4/14)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
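The "Web of meaning" described here rests on the RDF data model, which expresses statements as subject-predicate-object triples. A toy pure-Python sketch of that model (the triples and names below are invented for illustration; a real system would use an RDF store rather than a list):

```python
# Minimal sketch of the Semantic Web's data model: statements as
# (subject, predicate, object) triples, queried by pattern matching.

def match(triples, s=None, p=None, o=None):
    """Return every triple matching the (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

triples = [
    ("SemanticWeb", "subFieldOf", "ArtificialIntelligence"),
    ("DAML-ONT", "isA", "OntologyLanguage"),
    ("SHOE", "isA", "OntologyLanguage"),
]

print(match(triples, p="isA", o="OntologyLanguage"))
# → [('DAML-ONT', 'isA', 'OntologyLanguage'), ('SHOE', 'isA', 'OntologyLanguage')]
```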
    Content
    Contents: Tim Berners-Lee: The Original Dream - Re-enter Machines - Where Are We Now? - The World Wide Web Consortium - Where Is the Web Going Next? / Dieter Fensel, James Hendler, Henry Lieberman, and Wolfgang Wahlster: Why Is There a Need for the Semantic Web and What Will It Provide? - How the Semantic Web Will Be Possible / Jeff Heflin, James Hendler, and Sean Luke: SHOE: A Blueprint for the Semantic Web / Deborah L. McGuinness, Richard Fikes, Lynn Andrea Stein, and James Hendler: DAML-ONT: An Ontology Language for the Semantic Web / Michel Klein, Jeen Broekstra, Dieter Fensel, Frank van Harmelen, and Ian Horrocks: Ontologies and Schema Languages on the Web / Borys Omelayenko, Monica Crubezy, Dieter Fensel, Richard Benjamins, Bob Wielinga, Enrico Motta, Mark Musen, and Ying Ding: UPML: The Language and Tool Support for Making the Semantic Web Alive / Deborah L. McGuinness: Ontologies Come of Age / Jeen Broekstra, Arjohn Kampman, and Frank van Harmelen: Sesame: An Architecture for Storing and Querying RDF Data and Schema Information / Rob Jasper and Mike Uschold: Enabling Task-Centered Knowledge Support through Semantic Markup / Yolanda Gil: Knowledge Mobility: Semantics for the Web as a White Knight for Knowledge-Based Systems / Sanjeev Thacker, Amit Sheth, and Shuchi Patel: Complex Relationships for the Semantic Web / Alexander Maedche, Steffen Staab, Nenad Stojanovic, Rudi Studer, and York Sure: SEmantic portAL: The SEAL Approach / Ora Lassila and Mark Adler: Semantic Gadgets: Ubiquitous Computing Meets the Semantic Web / Christopher Frye, Mike Plusch, and Henry Lieberman: Static and Dynamic Semantics of the Web / Masahiro Hori: Semantic Annotation for Web Content Adaptation / Austin Tate, Jeff Dalton, John Levine, and Alex Nixon: Task-Achieving Agents on the World Wide Web
    LCSH
    Semantic Web
    World Wide Web
    RSWK
    Semantic Web
    Subject
    Semantic Web
    Semantic Web
    World Wide Web
    Theme
    Semantic Web
  3. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.03
    0.027659249 = product of:
      0.09680737 = sum of:
        0.033398256 = weight(_text_:wide in 4401) [ClassicSimilarity], result of:
          0.033398256 = score(doc=4401,freq=6.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.2543695 = fieldWeight in 4401, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
        0.04438265 = weight(_text_:web in 4401) [ClassicSimilarity], result of:
          0.04438265 = score(doc=4401,freq=36.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.45893115 = fieldWeight in 4401, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
        0.010039084 = weight(_text_:information in 4401) [ClassicSimilarity], result of:
          0.010039084 = score(doc=4401,freq=22.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19298252 = fieldWeight in 4401, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
        0.008987385 = weight(_text_:retrieval in 4401) [ClassicSimilarity], result of:
          0.008987385 = score(doc=4401,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.10026272 = fieldWeight in 4401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
      0.2857143 = coord(4/14)
    
    Abstract
    With the current changes driven by the expansion of the World Wide Web, this book takes a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent the semantics of semi-structured information, enabling sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet- and internet-based environments, employing the full power of ontologies to support knowledge management from both the information client's and the information provider's perspective. The book aims to support efficient and effective knowledge management and focuses on weakly structured online information sources. It is aimed primarily at researchers in the areas of knowledge management and information retrieval, and will also be a useful reference for postgraduate students in computer science and for business managers who aim to improve their corporations' information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for applying the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from the adoption of Semantic-Web-based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research.
    * Provides guidelines for introducing knowledge management concepts and tools into enterprises, helping knowledge providers present their knowledge efficiently and effectively.
    * Introduces an intelligent search tool that supports users in accessing information, and a tool environment for the maintenance, conversion and acquisition of information sources.
    * Discusses three large case studies which help to develop the technology according to the actual needs of large and/or virtual organisations and provide a testbed for evaluating tools and methods.
    The book is aimed at people with at least a good understanding of existing WWW technology and some technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area, but is not itself an introductory text.
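One kind of automatic support that such ontologies enable is subclass inference: membership in a class implies membership in all of its superclasses. A minimal sketch, with a made-up hierarchy rather than one from the book's case studies:

```python
# Minimal sketch of ontology-based inference: deriving implicit class
# membership from an explicit subclass hierarchy (hierarchy invented).

subclass_of = {
    "LifeInsurancePolicy": "InsurancePolicy",
    "InsurancePolicy": "Contract",
    "Contract": "Document",
}

def superclasses(cls):
    """All classes `cls` is transitively a subclass of, nearest first."""
    result = []
    while cls in subclass_of:
        cls = subclass_of[cls]
        result.append(cls)
    return result

print(superclasses("LifeInsurancePolicy"))
# → ['InsurancePolicy', 'Contract', 'Document']
```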
    Content
    Contents: OIL and DAML + OIL: Ontology Languages for the Semantic Web (pages 11-31) / Dieter Fensel, Frank van Harmelen and Ian Horrocks A Methodology for Ontology-Based Knowledge Management (pages 33-46) / York Sure and Rudi Studer Ontology Management: Storing, Aligning and Maintaining Ontologies (pages 47-69) / Michel Klein, Ying Ding, Dieter Fensel and Borys Omelayenko Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema (pages 71-89) / Jeen Broekstra, Arjohn Kampman and Frank van Harmelen Generating Ontologies for the Semantic Web: OntoBuilder (pages 91-115) / R. H. P. Engels and T. Ch. Lech OntoEdit: Collaborative Engineering of Ontologies (pages 117-132) / York Sure, Michael Erdmann and Rudi Studer QuizRDF: Search Technology for the Semantic Web (pages 133-144) / John Davies, Richard Weeks and Uwe Krohn Spectacle (pages 145-159) / Christiaan Fluit, Herko ter Horst, Jos van der Meer, Marta Sabou and Peter Mika OntoShare: Evolving Ontologies in a Knowledge Sharing System (pages 161-177) / John Davies, Alistair Duke and Audrius Stonkus Ontology Middleware and Reasoning (pages 179-196) / Atanas Kiryakov, Kiril Simov and Damyan Ognyanov Ontology-Based Knowledge Management at Work: The Swiss Life Case Studies (pages 197-218) / Ulrich Reimer, Peter Brockhausen, Thorsten Lau and Jacqueline R. Reich Field Experimenting with Semantic Web Tools in a Virtual Organization (pages 219-244) / Victor Iosif, Peter Mika, Rikard Larsson and Hans Akkermans A Future Perspective: Exploiting Peer-To-Peer and the Semantic Web for Knowledge Management (pages 245-264) / Dieter Fensel, Steffen Staab, Rudi Studer, Frank van Harmelen and John Davies Conclusions: Ontology-driven Knowledge Management - Towards the Semantic Web? (pages 265-266) / John Davies, Dieter Fensel and Frank van Harmelen
    LCSH
    Semantic web
    RSWK
    Semantic Web / Wissensmanagement / Wissenserwerb
    Wissensmanagement / World Wide web (BVB)
    Subject
    Semantic Web / Wissensmanagement / Wissenserwerb
    Wissensmanagement / World Wide web (BVB)
    Semantic web
    Theme
    Semantic Web
  4. Kuhlthau, C.C: Seeking meaning : a process approach to library and information services (2004) 0.02
    0.016333263 = product of:
      0.057166416 = sum of:
        0.016068742 = weight(_text_:wide in 3347) [ClassicSimilarity], result of:
          0.016068742 = score(doc=3347,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.122383565 = fieldWeight in 3347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3347)
        0.019510988 = weight(_text_:bibliothek in 3347) [ClassicSimilarity], result of:
          0.019510988 = score(doc=3347,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.16037227 = fieldWeight in 3347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3347)
        0.01099495 = weight(_text_:information in 3347) [ClassicSimilarity], result of:
          0.01099495 = score(doc=3347,freq=38.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21135727 = fieldWeight in 3347, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3347)
        0.010591734 = weight(_text_:retrieval in 3347) [ClassicSimilarity], result of:
          0.010591734 = score(doc=3347,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.11816074 = fieldWeight in 3347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3347)
      0.2857143 = coord(4/14)
    
    Footnote
    Review in: Information Research, 9(3), review no. R129 (T.D. Wilson): "The first edition of this book was published ten years ago and rapidly became something of a classic in the field of information seeking behaviour. It is good to see the second edition which incorporates not only the work the author has done since 1993, but also related work by other researchers. Kuhlthau is one of the most cited authors in the field and her model of the information search process, involving stages in the search and associated feelings, has been used by others in a variety of contexts. However, what makes this book different (as was the case with the first edition) is the author's dedication to the field of practice, and the book's sub-title demonstrates her commitment to the transfer of research. In Kuhlthau's case this is the practice of the school library media specialist, but her research has covered students of various ages as well as a wide range of occupational groups. Because the information search model is so well known, I shall concentrate in this review on the relationship between the research findings and practice. It is necessary, however, to begin with the search process model, because this is central. Briefly, the model proposes that the searcher goes through the stages of initiation, selection, exploration, formulation, collection and presentation, and, at each stage, experiences various feelings ranging from optimism and satisfaction to confusion and disappointment. Personally, I occasionally suffer despair, but perhaps that is too extreme for most!
    It is important to understand the origins of Kuhlthau's ideas in the work of the educational theorists, Dewey, Kelly and Bruner. Putting the matter in a rather simplistic manner, Dewey identified stages of cognition, Kelly attached the idea of feelings being associated with cognitive stages, and Bruner added the notion of actions associated with both. We can see this framework underlying Kuhlthau's research in her description of the actions undertaken at different stages in the search process and the associated feelings. Central to the transfer of these ideas to practice is the notion of the 'Zone of Intervention' or the point at which an information seeker can proceed more effectively with assistance than without. Kuhlthau identifies five intervention zones, the first of which involves intervention by the information seeker him/herself. The remaining four involve interventions of different kinds, which the author distinguishes according to the level of mediation required: zone 2 involves the librarian as 'locater', i.e., providing the quick reference response; zone 3, as 'identifier', i.e., discovering potentially useful information resources, but taking no further interest in the user; zone 4 as 'advisor', i.e., not only identifying possibly helpful resources, but guiding the user through them, and zone 5 as 'counsellor', which might be seen as a more intensive version of the advisor, guiding not simply on the sources, but also on the overall process, through a continuing interaction with the user. Clearly, these processes can be used in workshops, conference presentations and the classroom to sensitise the practitioner and the student to the range of helping strategies that ought to be made available to the information seeker. However, the author goes further, identifying a further set of strategies for intervening in the search process, which she describes as 'collaborating', 'continuing', 'choosing', 'charting', 'conversing' and 'composing'. 
'Collaboration' clearly involves the participation of others - fellow students, work peers, fellow researchers, or whatever, in the search process; 'continuing' intervention is associated with information seeking that involves a succession of actions - the intermediary 'stays with' the searcher throughout the process, available as needed to support him/her; 'choosing', that is, enabling the information seeker to identify the available choices in any given situation; 'charting' involves presenting a graphic illustration of the overall process and locating the information seeker in that chart; 'conversing' is the encouragement of discussion about the problem(s), and 'composing' involves the librarian as counsellor in encouraging the information seeker to document his/her experience, perhaps by keeping a diary of the process.
    Together with the zones of intervention, these ideas, and others set out in the book, provide a very powerful didactic mechanism for improving library and information service delivery. Of course, other things are necessary - the motivation to work in this way, and the availability of resources to enable its accomplishment. Sadly, at least in the UK, many libraries today are too financially pressed to do much more than the minimum helpful intervention in the information seeking process. However, that should not serve as a stick with which to beat the author: not only has she performed work of genuine significance in the field of human information behaviour, she has demonstrated beyond question that the ideas that have emerged from her research have the capability to help to deliver more effective services." Also available at: http://informationr.net/ir/reviews/revs129.html
    LCSH
    Information retrieval
    RSWK
    USA / Bibliothek / Informationsmanagement
    Subject
    USA / Bibliothek / Informationsmanagement
    Information retrieval
    Theme
    Information
  5. Jacquemin, C.: Spotting and discovering terms through natural language processing (2001) 0.01
    0.013330658 = product of:
      0.062209737 = sum of:
        0.017435152 = weight(_text_:web in 119) [ClassicSimilarity], result of:
          0.017435152 = score(doc=119,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
        0.011280581 = weight(_text_:information in 119) [ClassicSimilarity], result of:
          0.011280581 = score(doc=119,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21684799 = fieldWeight in 119, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
        0.033494003 = weight(_text_:retrieval in 119) [ClassicSimilarity], result of:
          0.033494003 = score(doc=119,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37365708 = fieldWeight in 119, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
      0.21428572 = coord(3/14)
    
    Abstract
    In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information. The use of a corpus-based unification grammar to define, recognize, and combine term variants from their base forms allows for intelligent information access to, or "linguistic data tuning" of, heterogeneous texts. FASTR can be used to do automatic controlled indexing, to carry out content-based Web searches through conceptually related alternative query formulations, to abstract scientific and technical extracts, and even to translate and collect terms from multilingual material. Jacquemin provides a comprehensive account of the method and implementation of this innovative retrieval technique for text processing.
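For a rough feel of what term-variant spotting means, the sketch below flags a span as a variant of a base term when all of the term's words occur within a short window. This is a deliberate caricature: FASTR's actual metagrammar applies morphological, syntactic and semantic transformations rather than bag-of-words windows.

```python
# Crude illustration of term-variant spotting in the spirit of FASTR
# (not Jacquemin's actual rules): a span counts as a variant of a base
# term if it contains every word of the term within a short window.

def spot_variants(term, text, window=4):
    """Flag token spans that contain every word of `term` within a short window."""
    words = term.lower().split()
    tokens = text.lower().split()
    span = len(words) + window          # allow a few intervening words
    hits = []
    for i, tok in enumerate(tokens):
        if tok not in words:            # candidate spans start on a term word
            continue
        chunk = tokens[i:i + span]
        if all(w in chunk for w in words):
            hits.append(" ".join(chunk))
    return hits

print(spot_variants("information retrieval",
                    "efficient retrieval of textual information"))
# → ['retrieval of textual information']
```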
    RSWK
    Automatische Indexierung  / Computerlinguistik  / Information Retrieval
    Subject
    Automatische Indexierung  / Computerlinguistik  / Information Retrieval
  6. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.01
    0.0065501803 = product of:
      0.04585126 = sum of:
        0.012357258 = weight(_text_:information in 4325) [ClassicSimilarity], result of:
          0.012357258 = score(doc=4325,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23754507 = fieldWeight in 4325, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4325)
        0.033494003 = weight(_text_:retrieval in 4325) [ClassicSimilarity], result of:
          0.033494003 = score(doc=4325,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37365708 = fieldWeight in 4325, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4325)
      0.14285715 = coord(2/14)
    
    Abstract
    Inexpensive mass storage and the growing interconnection of computers have caused the number of documents a single individual can access (e.g. web pages), or that stream in on the individual (e.g. e-mails), to rise rapidly in recent years. In more and more areas of business, science and administration, demand is growing for high-quality information filtering and retrieval tools to master the information flood. A computer-supported solution to this problem requires models for representing natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on these results, the potential of ontologies in this context is explored, and a new ontology-based model for information filtering and retrieval is developed, which is explained in detail using text and code examples. The book is aimed at lecturers and students of computer science, business informatics and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
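The ontology-based filtering idea described in this abstract can be caricatured in a few lines: a user's interest is expanded with ontologically related concepts before documents are matched. The mini-"ontology" and sample documents below are invented for illustration:

```python
# Toy sketch of ontology-based information filtering: a user's interest
# is expanded with ontologically related concepts before documents are
# matched. The "ontology" and documents are invented for illustration.

related = {
    "retrieval": {"suche", "filterung"},   # concept -> related concepts
}

def relevant(interest, document):
    """True if the document mentions the interest or an ontologically related term."""
    terms = {interest} | related.get(interest, set())
    return bool(terms & set(document.lower().split()))

print(relevant("retrieval", "Modelle zur Suche in Dokumenten"))   # → True
```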
    RSWK
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
    Series
    Advances in information systems and management science; 10
    Subject
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
  7. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.01
    0.0065501803 = product of:
      0.04585126 = sum of:
        0.012357258 = weight(_text_:information in 4385) [ClassicSimilarity], result of:
          0.012357258 = score(doc=4385,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23754507 = fieldWeight in 4385, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4385)
        0.033494003 = weight(_text_:retrieval in 4385) [ClassicSimilarity], result of:
          0.033494003 = score(doc=4385,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37365708 = fieldWeight in 4385, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4385)
      0.14285715 = coord(2/14)
    
    Abstract
    Inexpensive mass storage and the increasing interconnection of computers have caused the number of documents an individual can access (e.g. web pages) or that stream in on the individual (e.g. e-mails) to rise rapidly in recent years. In more and more areas of business, science, and administration, the demand for high-quality information-filtering and -retrieval tools to master this flood of information is growing. Computer-supported solutions to this problem require models for representing natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In this work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A wide range of models from research and practice is presented and evaluated. Building on these results, the potential of ontologies in this context is explored, and a new ontology-based model for information filtering and retrieval is developed, which is explained in detail with text and code examples. The book is aimed at lecturers and students of computer science, business informatics, and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
    RSWK
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
    Series
    Advances in information systems and management science; 10
    Subject
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
  8. Jurafsky, D.; Martin, J.H.: Speech and language processing : an introduction to natural language processing, computational linguistics and speech recognition (2009) 0.00
    0.0017612164 = product of:
      0.02465703 = sum of:
        0.02465703 = weight(_text_:web in 1081) [ClassicSimilarity], result of:
          0.02465703 = score(doc=1081,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 1081, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
      0.071428575 = coord(1/14)
    
    Abstract
    For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, the merging of distinct fields, the availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology, at all levels and with all modern technologies, this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material.
  9. Tufte, E.R.: Envisioning information (1990) 0.00
    0.0012230515 = product of:
      0.01712272 = sum of:
        0.01712272 = weight(_text_:information in 3733) [ClassicSimilarity], result of:
          0.01712272 = score(doc=3733,freq=16.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3291521 = fieldWeight in 3733, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
      0.071428575 = coord(1/14)
    
    Classification
    Kun H 70 Information
    Pub A 91 / Information
    Content
    Contents: Escaping flatland -- Micro/macro readings -- Layering and separation -- Small multiples -- Color and information -- Narratives of space and time.
    RSWK
    Information / Visualisierung / Gebrauchsgrafik
    SBB
    Kun H 70 Information
    Pub A 91 / Information
    Subject
    Information / Visualisierung / Gebrauchsgrafik
  10. Hutchins, W.J.; Somers, H.L.: ¬An introduction to machine translation (1992) 0.00
    5.0960475E-4 = product of:
      0.0071344664 = sum of:
        0.0071344664 = weight(_text_:information in 4512) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=4512,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 4512, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4512)
      0.071428575 = coord(1/14)
    
    Abstract
    The translation of foreign language texts by computers was one of the first tasks that the pioneers of computing and artificial intelligence set themselves. Machine translation is again becoming an important field of research and development, as the need for translations of technical and commercial documentation is growing well beyond the capacity of the translation profession. This is the first textbook of machine translation, providing a full course on both the general characteristics of machine translation systems and the computational linguistic foundations of the field. The book assumes no previous knowledge of machine translation and provides the basic background information on linguistics, computational linguistics, artificial intelligence, natural language processing, and information science.
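
The score breakdowns attached to each result follow Lucene's ClassicSimilarity, whose factors (tf, idf, queryNorm, fieldNorm, coord) appear explicitly in the explain trees above. As a minimal sketch, the "retrieval" clause of result 7 (doc 4385) can be reproduced from the listed numbers; the idf formula `1 + ln(maxDocs / (docFreq + 1))` is ClassicSimilarity's standard definition, not something stated on this page:

```python
import math

# Reproduces one clause of the Lucene "explain" output above: the weight
# of the term "retrieval" in result 7 (doc 4385), scored with
# ClassicSimilarity. All input values are taken from the explanation.
freq = 10.0              # termFreq: "retrieval" occurs 10 times in the field
doc_freq = 5836          # documents containing the term
max_docs = 44218         # documents in the index
query_norm = 0.029633347
field_norm = 0.0390625   # encoded length normalization for this field

tf = math.sqrt(freq)                           # 3.1622777
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.024915
query_weight = idf * query_norm                # 0.08963835
field_weight = tf * idf * field_norm           # 0.37365708
score = query_weight * field_weight            # 0.033494003

# coord(2/14): only 2 of the 14 query clauses matched this document, so
# the summed clause scores are finally multiplied by 2/14 = 0.14285715.
coord = 2 / 14
```

The same arithmetic accounts for every clause on this page; only freq, docFreq, and fieldNorm vary between terms and documents.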
