Search (8 results, page 1 of 1)

  • classification_ss:"TVV (DU)"
  1. Tufte, E.R.: Envisioning information (1990) 0.01
    0.0050479556 = product of:
      0.020191822 = sum of:
        0.020191822 = weight(_text_:information in 3733) [ClassicSimilarity], result of:
          0.020191822 = score(doc=3733,freq=16.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3291521 = fieldWeight in 3733, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
      0.25 = coord(1/4)
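The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: fieldWeight = √tf · idf · fieldNorm, queryWeight = idf · queryNorm, and the final score multiplies the two and applies the coordination factor. A minimal sketch, using only constants taken from the explain output for result 1, reproduces the numbers:

```python
# Recompute the ClassicSimilarity score shown in the explain tree above.
# Every constant below is copied from the explain output; the formula is
# Lucene's classic TF-IDF scoring.
import math

freq = 16.0                  # termFreq of "information" in doc 3733
tf = math.sqrt(freq)         # 4.0 = tf(freq=16.0)
idf = 1.7554779              # idf(docFreq=20772, maxDocs=44218)
query_norm = 0.034944877     # queryNorm
field_norm = 0.046875        # fieldNorm(doc=3733)
coord = 0.25                 # coord(1/4): 1 of 4 query clauses matched

query_weight = idf * query_norm           # ~ 0.06134496
field_weight = tf * idf * field_norm      # ~ 0.3291521 (fieldWeight)
score = field_weight * query_weight * coord  # ~ 0.0050479556

print(round(score, 10))
```

Substituting each record's own freq and fieldNorm values reproduces the scores of the remaining results in the same way.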
    
    Classification
    Kun H 70 Information
    Pub A 91 / Information
    Content
    Contents: Escaping flatland -- Micro/macro readings -- Layering and separation -- Small multiples -- Color and information -- Narratives of space and time.
    RSWK
    Information / Visualisierung / Gebrauchsgrafik
    SBB
    Kun H 70 Information
    Pub A 91 / Information
    Subject
    Information / Visualisierung / Gebrauchsgrafik
  2. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.00
    0.0036430482 = product of:
      0.014572193 = sum of:
        0.014572193 = weight(_text_:information in 4325) [ClassicSimilarity], result of:
          0.014572193 = score(doc=4325,freq=12.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23754507 = fieldWeight in 4325, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4325)
      0.25 = coord(1/4)
    
    Abstract
    Inexpensive mass storage and the increasing networking of computers have caused the number of documents a single individual can access (e.g. web pages) or that stream in on that individual (e.g. e-mails) to rise rapidly in recent years. In ever more areas of business, science and administration, the demand for high-quality information filtering and retrieval tools to master this information flood is growing. Computer-supported solutions to this problem require models for the representation of natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In his work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on the results, the potential of ontologies in this context is explored, and a new ontology-based model for information filtering and retrieval is developed, which is explained in detail using text and code examples. The book is aimed at lecturers and students of computer science, business informatics and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
    RSWK
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
    Series
    Advances in information systems and management science; 10
    Subject
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
  3. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.00
    0.0036430482 = product of:
      0.014572193 = sum of:
        0.014572193 = weight(_text_:information in 4385) [ClassicSimilarity], result of:
          0.014572193 = score(doc=4385,freq=12.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23754507 = fieldWeight in 4385, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4385)
      0.25 = coord(1/4)
    
    Abstract
    Inexpensive mass storage and the increasing networking of computers have caused the number of documents a single individual can access (e.g. web pages) or that stream in on that individual (e.g. e-mails) to rise rapidly in recent years. In ever more areas of business, science and administration, the demand for high-quality information filtering and retrieval tools to master this information flood is growing. Computer-supported solutions to this problem require models for the representation of natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In his work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on the results, the potential of ontologies in this context is explored, and a new ontology-based model for information filtering and retrieval is developed, which is explained in detail using text and code examples. The book is aimed at lecturers and students of computer science, business informatics and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
    RSWK
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
    Series
    Advances in information systems and management science; 10
    Subject
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
  4. Jacquemin, C.: Spotting and discovering terms through natural language processing (2001) 0.00
    0.0033256328 = product of:
      0.013302531 = sum of:
        0.013302531 = weight(_text_:information in 119) [ClassicSimilarity], result of:
          0.013302531 = score(doc=119,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21684799 = fieldWeight in 119, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
      0.25 = coord(1/4)
    
    Abstract
    In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information. The use of a corpus-based unification grammar to define, recognize, and combine term variants from their base forms allows for intelligent information access to, or "linguistic data tuning" of, heterogeneous texts. FASTR can be used to do automatic controlled indexing, to carry out content-based Web searches through conceptually related alternative query formulations, to abstract scientific and technical extracts, and even to translate and collect terms from multilingual material. Jacquemin provides a comprehensive account of the method and implementation of this innovative retrieval technique for text processing.
    RSWK
    Automatische Indexierung / Computerlinguistik / Information Retrieval
    Subject
    Automatische Indexierung / Computerlinguistik / Information Retrieval
  5. Kuhlthau, C.C.: Seeking meaning : a process approach to library and information services (2004) 0.00
    0.0032414256 = product of:
      0.012965702 = sum of:
        0.012965702 = weight(_text_:information in 3347) [ClassicSimilarity], result of:
          0.012965702 = score(doc=3347,freq=38.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21135727 = fieldWeight in 3347, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3347)
      0.25 = coord(1/4)
    
    Footnote
    Rev. in: Information Research, 9(3), review no. R129 (T.D. Wilson): "The first edition of this book was published ten years ago and rapidly became something of a classic in the field of information seeking behaviour. It is good to see the second edition, which incorporates not only the work the author has done since 1993, but also related work by other researchers. Kuhlthau is one of the most cited authors in the field and her model of the information search process, involving stages in the search and associated feelings, has been used by others in a variety of contexts. However, what makes this book different (as was the case with the first edition) is the author's dedication to the field of practice, and the book's sub-title demonstrates her commitment to the transfer of research. In Kuhlthau's case this is the practice of the school library media specialist, but her research has covered students of various ages as well as a wide range of occupational groups. Because the information search model is so well known, I shall concentrate in this review on the relationship between the research findings and practice. It is necessary, however, to begin with the search process model, because this is central. Briefly, the model proposes that the searcher goes through the stages of initiation, selection, exploration, formulation, collection and presentation, and, at each stage, experiences various feelings ranging from optimism and satisfaction to confusion and disappointment. Personally, I occasionally suffer despair, but perhaps that is too extreme for most!
    It is important to understand the origins of Kuhlthau's ideas in the work of the educational theorists, Dewey, Kelly and Bruner. Putting the matter in a rather simplistic manner, Dewey identified stages of cognition, Kelly attached the idea of feelings being associated with cognitive stages, and Bruner added the notion of actions associated with both. We can see this framework underlying Kuhlthau's research in her description of the actions undertaken at different stages in the search process and the associated feelings. Central to the transfer of these ideas to practice is the notion of the 'Zone of Intervention', or the point at which an information seeker can proceed more effectively with assistance than without. Kuhlthau identifies five intervention zones, the first of which involves intervention by the information seeker him/herself. The remaining four involve interventions of different kinds, which the author distinguishes according to the level of mediation required: zone 2 involves the librarian as 'locater', i.e., providing the quick reference response; zone 3, as 'identifier', i.e., discovering potentially useful information resources, but taking no further interest in the user; zone 4, as 'advisor', i.e., not only identifying possibly helpful resources, but guiding the user through them; and zone 5, as 'counsellor', which might be seen as a more intensive version of the advisor, guiding not simply on the sources, but also on the overall process, through a continuing interaction with the user. Clearly, these processes can be used in workshops, conference presentations and the classroom to sensitise the practitioner and the student to the range of helping strategies that ought to be made available to the information seeker. However, the author goes further, identifying a further set of strategies for intervening in the search process, which she describes as 'collaborating', 'continuing', 'choosing', 'charting', 'conversing' and 'composing'.
'Collaboration' clearly involves the participation of others - fellow students, work peers, fellow researchers, or whatever, in the search process; 'continuing' intervention is associated with information seeking that involves a succession of actions - the intermediary 'stays with' the searcher throughout the process, available as needed to support him/her; 'choosing', that is, enabling the information seeker to identify the available choices in any given situation; 'charting' involves presenting a graphic illustration of the overall process and locating the information seeker in that chart; 'conversing' is the encouragement of discussion about the problem(s), and 'composing' involves the librarian as counsellor in encouraging the information seeker to document his/her experience, perhaps by keeping a diary of the process.
    Together with the zones of intervention, these ideas, and others set out in the book, provide a very powerful didactic mechanism for improving library and information service delivery. Of course, other things are necessary - the motivation to work in this way, and the availability of resources to enable its accomplishment. Sadly, at least in the UK, many libraries today are too financially pressed to do much more than the minimum helpful intervention in the information seeking process. However, that should not serve as a stick with which to beat the author: not only has she performed work of genuine significance in the field of human information behaviour, she has demonstrated beyond question that the ideas that have emerged from her research have the capability to help to deliver more effective services." Also at: http://informationr.net/ir/reviews/revs129.html
    LCSH
    Information retrieval
    Subject
    Information retrieval
    Theme
    Information
  6. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.00
    0.0029596263 = product of:
      0.011838505 = sum of:
        0.011838505 = weight(_text_:information in 4401) [ClassicSimilarity], result of:
          0.011838505 = score(doc=4401,freq=22.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.19298252 = fieldWeight in 4401, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
      0.25 = coord(1/4)
    
    Abstract
    With the current changes driven by the expansion of the World Wide Web, this book takes a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to represent the semantics of semi-structured information explicitly, enabling sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intra-, extra- and internet-based environments, employing the full power of ontologies to support knowledge management from both the information client's and the information provider's perspective. The book aims to support efficient and effective knowledge management and focuses on weakly-structured online information sources. It is aimed primarily at researchers in the areas of knowledge management and information retrieval, and will also be a useful reference for postgraduate students in computer science and for business managers who aim to improve their corporation's information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW and is currently generating huge interest. The book covers several highly significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for applying the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from adopting semantic-web-based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research.
    * Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively.
    * Introduces an intelligent search tool that supports users in accessing information, and a tool environment for maintenance, conversion and acquisition of information sources.
    * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods.
    The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
  7. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.00
    0.0027544592 = product of:
      0.011017837 = sum of:
        0.011017837 = weight(_text_:information in 1981) [ClassicSimilarity], result of:
          0.011017837 = score(doc=1981,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1796046 = fieldWeight in 1981, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.25 = coord(1/4)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
    Content
    Contents: Tim Berners-Lee: The Original Dream - Re-enter Machines - Where Are We Now? - The World Wide Web Consortium - Where Is the Web Going Next? / Dieter Fensel, James Hendler, Henry Lieberman, and Wolfgang Wahlster: Why Is There a Need for the Semantic Web and What Will It Provide? - How the Semantic Web Will Be Possible / Jeff Heflin, James Hendler, and Sean Luke: SHOE: A Blueprint for the Semantic Web / Deborah L. McGuinness, Richard Fikes, Lynn Andrea Stein, and James Hendler: DAML-ONT: An Ontology Language for the Semantic Web / Michel Klein, Jeen Broekstra, Dieter Fensel, Frank van Harmelen, and Ian Horrocks: Ontologies and Schema Languages on the Web / Borys Omelayenko, Monica Crubezy, Dieter Fensel, Richard Benjamins, Bob Wielinga, Enrico Motta, Mark Musen, and Ying Ding: UPML: The Language and Tool Support for Making the Semantic Web Alive / Deborah L. McGuinness: Ontologies Come of Age / Jeen Broekstra, Arjohn Kampman, and Frank van Harmelen: Sesame: An Architecture for Storing and Querying RDF Data and Schema Information / Rob Jasper and Mike Uschold: Enabling Task-Centered Knowledge Support through Semantic Markup / Yolanda Gil: Knowledge Mobility: Semantics for the Web as a White Knight for Knowledge-Based Systems / Sanjeev Thacker, Amit Sheth, and Shuchi Patel: Complex Relationships for the Semantic Web / Alexander Maedche, Steffen Staab, Nenad Stojanovic, Rudi Studer, and York Sure: SEmantic portAL: The SEAL Approach / Ora Lassila and Mark Adler: Semantic Gadgets: Ubiquitous Computing Meets the Semantic Web / Christopher Frye, Mike Plusch, and Henry Lieberman: Static and Dynamic Semantics of the Web / Masahiro Hori: Semantic Annotation for Web Content Adaptation / Austin Tate, Jeff Dalton, John Levine, and Alex Nixon: Task-Achieving Agents on the World Wide Web
  8. Hutchins, W.J.; Somers, H.L.: An introduction to machine translation (1992) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 4512) [ClassicSimilarity], result of:
          0.008413259 = score(doc=4512,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 4512, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4512)
      0.25 = coord(1/4)
    
    Abstract
    The translation of foreign-language texts by computers was one of the first tasks that the pioneers of computing and artificial intelligence set themselves. Machine translation is again becoming an important field of research and development as the need for translations of technical and commercial documentation grows well beyond the capacity of the translation profession. This is the first textbook of machine translation, providing a full course on both the general characteristics of machine translation systems and the computational-linguistic foundations of the field. The book assumes no previous knowledge of machine translation and provides the basic background information on linguistics, computational linguistics, artificial intelligence, natural language processing and information science.
