Search (8 results, page 1 of 1)

  • Active filter: classification_ss:"TVV (DU)"
  1. Dreyfus, H.L.: Die Grenzen künstlicher Intelligenz : was Computer nicht können (1985) 0.01
    0.0073762485 = product of:
      0.059009988 = sum of:
        0.059009988 = weight(_text_:computer in 4332) [ClassicSimilarity], result of:
          0.059009988 = score(doc=4332,freq=8.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.40377006 = fieldWeight in 4332, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4332)
      0.125 = coord(1/8)
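The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: the score is queryWeight · fieldWeight, scaled by a coord factor for the fraction of query clauses that matched. A minimal Python sketch (an illustration added here, not part of the catalog record) that reproduces this hit's numbers from the constants shown in the tree:

```python
import math

# Constants read directly from the explain tree for doc 4332
FREQ = 8.0               # termFreq of "computer" in the matched field
DOC_FREQ = 3109          # docFreq from the idf line
MAX_DOCS = 44218         # maxDocs from the idf line
QUERY_NORM = 0.039991006
FIELD_NORM = 0.0390625   # encodes field-length normalization
COORD = 1 / 8            # 1 of 8 query clauses matched

# ClassicSimilarity building blocks
tf = math.sqrt(FREQ)                           # 2.828427
idf = 1 + math.log(MAX_DOCS / (DOC_FREQ + 1))  # ~3.6545093
query_weight = idf * QUERY_NORM                # 0.1461475
field_weight = tf * idf * FIELD_NORM           # 0.40377006
score = query_weight * field_weight            # 0.059009988
final = score * COORD                          # 0.0073762485

print(final)
```

The same breakdown explains every result below; only the frequencies, norms, and coord factors change.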
    
    Footnote
    In various catalogs, the main title (HST) and the additional title (ZST) are also given in reversed order (cf. the design of the cover and title page). Title of the original: What computers can't do: the limits of artificial intelligence.
    RSWK
    Computer / Intelligenz / EDV (SBB)
    Subject
    Computer / Intelligenz / EDV (SBB)
  2. Computerlinguistik und Sprachtechnologie : Eine Einführung (2010) 0.01
    0.007227218 = product of:
      0.057817742 = sum of:
        0.057817742 = weight(_text_:computer in 1735) [ClassicSimilarity], result of:
          0.057817742 = score(doc=1735,freq=12.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.39561224 = fieldWeight in 1735, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=1735)
      0.125 = coord(1/8)
    
    LCSH
    Computer science
    Translators (Computer programs)
    Computer science
    Subject
    Computer science
    Translators (Computer programs)
    Computer science
  3. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.00
    0.0036881242 = product of:
      0.029504994 = sum of:
        0.029504994 = weight(_text_:computer in 4325) [ClassicSimilarity], result of:
          0.029504994 = score(doc=4325,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.20188503 = fieldWeight in 4325, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4325)
      0.125 = coord(1/8)
    
    Abstract
    Inexpensive mass storage and the increasing networking of computers have caused the number of documents a single individual can access (e.g. web pages) or that stream in on that individual (e.g. e-mails) to rise rapidly in recent years. In more and more areas of business, science and administration, the need for high-quality information filtering and retrieval tools to master this flood of information is growing. A computer-supported solution to this problem requires models for representing natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In this work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on these results, the potential of ontologies in this context is explored, and a new ontology-based model for information filtering and retrieval is developed, which is explained in detail with text and code examples. The book is aimed at lecturers and students of computer science, business informatics and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
  4. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.00
    0.0036881242 = product of:
      0.029504994 = sum of:
        0.029504994 = weight(_text_:computer in 4385) [ClassicSimilarity], result of:
          0.029504994 = score(doc=4385,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.20188503 = fieldWeight in 4385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4385)
      0.125 = coord(1/8)
    
    Abstract
    Inexpensive mass storage and the increasing networking of computers have caused the number of documents a single individual can access (e.g. web pages) or that stream in on that individual (e.g. e-mails) to rise rapidly in recent years. In more and more areas of business, science and administration, the need for high-quality information filtering and retrieval tools to master this flood of information is growing. A computer-supported solution to this problem requires models for representing natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In this work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on these results, the potential of ontologies in this context is explored, and a new ontology-based model for information filtering and retrieval is developed, which is explained in detail with text and code examples. The book is aimed at lecturers and students of computer science, business informatics and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
  5. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.00
    0.0022128746 = product of:
      0.017702997 = sum of:
        0.017702997 = weight(_text_:computer in 4401) [ClassicSimilarity], result of:
          0.017702997 = score(doc=4401,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.12113102 = fieldWeight in 4401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
      0.125 = coord(1/8)
    
    Abstract
    With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent the semantics of semi-structured information, which enables sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet- and internet-based environments, employing the full power of ontologies to support knowledge management from the perspective of both the information client and the information provider. The aim of the book is to support efficient and effective knowledge management, with a focus on weakly-structured online information sources. It is aimed primarily at researchers in the area of knowledge management and information retrieval and will also be a useful reference for students in computer science at the postgraduate level and for business managers who are aiming to improve their corporations' information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the semantic web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for the application of the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from the adoption of semantic-web based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research.
    * Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively.
    * Introduces an intelligent search tool that supports users in accessing information and a tool environment for maintenance, conversion and acquisition of information sources.
    * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods.
    The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
  6. Jurafsky, D.; Martin, J.H.: Speech and language processing : an introduction to natural language processing, computational linguistics and speech recognition (2009) 0.00
    0.0018398546 = product of:
      0.014718837 = sum of:
        0.014718837 = product of:
          0.029437674 = sum of:
            0.029437674 = weight(_text_:resources in 1081) [ClassicSimilarity], result of:
              0.029437674 = score(doc=1081,freq=2.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.20165458 = fieldWeight in 1081, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1081)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, the merging of distinct fields, the availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies, this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material.
  7. Schweibenz, W.; Thissen, F.: Qualität im Web : Benutzerfreundliche Webseiten durch Usability Evaluation (2003) 0.00
    0.0016931967 = product of:
      0.013545574 = sum of:
        0.013545574 = product of:
          0.027091147 = sum of:
            0.027091147 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
              0.027091147 = score(doc=767,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.19345059 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=767)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 3.2008 14:24:08
  8. Kuhlthau, C.C.: Seeking meaning : a process approach to library and information services (2004) 0.00
    0.0015933609 = product of:
      0.012746887 = sum of:
        0.012746887 = product of:
          0.025493775 = sum of:
            0.025493775 = weight(_text_:resources in 3347) [ClassicSimilarity], result of:
              0.025493775 = score(doc=3347,freq=6.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.174638 = fieldWeight in 3347, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3347)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
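In results 6-8 the matching clause sits one level deeper in the query, so two coord factors apply: coord(1/2) for the sub-query and coord(1/8) at the top level. A short sketch (again an illustration, using only constants read from the tree above for doc 3347) showing how the nested factors combine:

```python
import math

# Constants read directly from the explain tree for doc 3347
FREQ = 6.0                 # termFreq of "resources"
QUERY_WEIGHT = 0.14598069  # idf * queryNorm, as shown in the tree
IDF = 3.650338
FIELD_NORM = 0.01953125    # smaller norm: a longer field than in result 1

tf = math.sqrt(FREQ)                  # 2.4494898
field_weight = tf * IDF * FIELD_NORM  # 0.174638
raw = QUERY_WEIGHT * field_weight     # 0.025493775
inner = raw * (1 / 2)                 # coord(1/2): 1 of 2 sub-clauses matched
final = inner * (1 / 8)               # coord(1/8): 1 of 8 top-level clauses

print(final)                          # ~0.0015933609
```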
    
    Footnote
    It is important to understand the origins of Kuhlthau's ideas in the work of the educational theorists, Dewey, Kelly and Bruner. Putting the matter in a rather simplistic manner, Dewey identified stages of cognition, Kelly attached the idea of feelings being associated with cognitive stages, and Bruner added the notion of actions associated with both. We can see this framework underlying Kuhlthau's research in her description of the actions undertaken at different stages in the search process and the associated feelings. Central to the transfer of these ideas to practice is the notion of the 'Zone of Intervention' or the point at which an information seeker can proceed more effectively with assistance than without. Kuhlthau identifies five intervention zones, the first of which involves intervention by the information seeker him/herself. The remaining four involve interventions of different kinds, which the author distinguishes according to the level of mediation required: zone 2 involves the librarian as 'locater', i.e., providing the quick reference response; zone 3, as 'identifier', i.e., discovering potentially useful information resources, but taking no further interest in the user; zone 4, as 'advisor', i.e., not only identifying possibly helpful resources, but guiding the user through them; and zone 5, as 'counsellor', which might be seen as a more intensive version of the advisor, guiding not simply on the sources, but also on the overall process, through a continuing interaction with the user. Clearly, these processes can be used in workshops, conference presentations and the classroom to sensitise the practitioner and the student to the range of helping strategies that ought to be made available to the information seeker. However, the author goes further, identifying a further set of strategies for intervening in the search process, which she describes as 'collaborating', 'continuing', 'choosing', 'charting', 'conversing' and 'composing'.
'Collaboration' clearly involves the participation of others - fellow students, work peers, fellow researchers, or whatever, in the search process; 'continuing' intervention is associated with information seeking that involves a succession of actions - the intermediary 'stays with' the searcher throughout the process, available as needed to support him/her; 'choosing', that is, enabling the information seeker to identify the available choices in any given situation; 'charting' involves presenting a graphic illustration of the overall process and locating the information seeker in that chart; 'conversing' is the encouragement of discussion about the problem(s), and 'composing' involves the librarian as counsellor in encouraging the information seeker to document his/her experience, perhaps by keeping a diary of the process.
    Together with the zones of intervention, these ideas, and others set out in the book, provide a very powerful didactic mechanism for improving library and information service delivery. Of course, other things are necessary - the motivation to work in this way, and the availability of resources to enable its accomplishment. Sadly, at least in the UK, many libraries today are too financially pressed to do much more than the minimum helpful intervention in the information seeking process. However, that should not serve as a stick with which to beat the author: not only has she performed work of genuine significance in the field of human information behaviour, she has demonstrated beyond question that the ideas that have emerged from her research have the capability to help to deliver more effective services." Also at: http://informationr.net/ir/reviews/revs129.html
