Search (7 results, page 1 of 1)

  • classification_ss:"TVV (DU)"
  1. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.02
    0.019294377 = product of:
      0.05788313 = sum of:
        0.05788313 = weight(_text_:retrieval in 4325) [ClassicSimilarity], result of:
          0.05788313 = score(doc=4325,freq=10.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.37365708 = fieldWeight in 4325, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4325)
      0.33333334 = coord(1/3)
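The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. A minimal sketch reproducing its numbers, assuming the textbook definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the constants (queryNorm, fieldNorm, freq, coord) are taken directly from the explain output:

```python
import math

# Lucene ClassicSimilarity building blocks (assumed standard definitions)
def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Values from the explain output for doc 4325
query_norm = 0.051211275
field_norm = 0.0390625
freq = 10.0

idf_val = idf(5836, 44218)                       # ~ 3.024915
query_weight = idf_val * query_norm              # ~ 0.15490976
field_weight = tf(freq) * idf_val * field_norm   # ~ 0.37365708
score = query_weight * field_weight              # ~ 0.05788313
final = score * (1 / 3)                          # coord(1/3) -> ~ 0.019294377
```

Multiplying the per-term score by coord(1/3) reflects that only one of the three query clauses matched this document.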
    
    Abstract
    Inexpensive mass storage and the growing interconnection of computers have, in recent years, caused a rapid increase in the number of documents a single individual can access (e.g. web pages) or that stream in on that individual (e.g. e-mails). In more and more areas of business, science, and public administration, the demand for high-quality information-filtering and -retrieval tools to master this information flood is growing. Computer-assisted solutions to this problem require models for representing natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In this work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on these results, the potential of ontologies in this context is explored, and a new, ontology-based model for information filtering and retrieval is developed, which is explained in detail using text and code examples. The book is aimed at lecturers and students of computer science, business informatics, and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
    RSWK
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
    Subject
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
  2. Kuropka, D.: Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2004) 0.02
    0.019294377 = product of:
      0.05788313 = sum of:
        0.05788313 = weight(_text_:retrieval in 4385) [ClassicSimilarity], result of:
          0.05788313 = score(doc=4385,freq=10.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.37365708 = fieldWeight in 4385, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4385)
      0.33333334 = coord(1/3)
    
    Abstract
    Inexpensive mass storage and the growing interconnection of computers have, in recent years, caused a rapid increase in the number of documents a single individual can access (e.g. web pages) or that stream in on that individual (e.g. e-mails). In more and more areas of business, science, and public administration, the demand for high-quality information-filtering and -retrieval tools to master this information flood is growing. Computer-assisted solutions to this problem require models for representing natural-language documents, so that formal criteria for the automated selection of relevant documents can be defined. In this work, Dominik Kuropka gives a comprehensive overview of the field of searching and filtering natural-language documents. A large number of models from research and practice are presented and evaluated. Building on these results, the potential of ontologies in this context is explored, and a new, ontology-based model for information filtering and retrieval is developed, which is explained in detail using text and code examples. The book is aimed at lecturers and students of computer science, business informatics, and (computational) linguistics, as well as at system designers and developers of document-oriented application systems and tools.
    RSWK
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
    Subject
    Natürlichsprachiges System / Dokumentverarbeitung / Wissensrepräsentation / Benutzermodell / Information Retrieval / Relationales Datenmodell
  3. Jacquemin, C.: Spotting and discovering terms through natural language processing (2001) 0.02
    0.019294377 = product of:
      0.05788313 = sum of:
        0.05788313 = weight(_text_:retrieval in 119) [ClassicSimilarity], result of:
          0.05788313 = score(doc=119,freq=10.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.37365708 = fieldWeight in 119, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
      0.33333334 = coord(1/3)
    
    Abstract
    In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information. The use of a corpus-based unification grammar to define, recognize, and combine term variants from their base forms allows for intelligent information access to, or "linguistic data tuning" of, heterogeneous texts. FASTR can be used to do automatic controlled indexing, to carry out content-based Web searches through conceptually related alternative query formulations, to abstract scientific and technical extracts, and even to translate and collect terms from multilingual material. Jacquemin provides a comprehensive account of the method and implementation of this innovative retrieval technique for text processing.
    RSWK
    Automatische Indexierung / Computerlinguistik / Information Retrieval
    Subject
    Automatische Indexierung / Computerlinguistik / Information Retrieval
  4. Kuhlthau, C.C.: Seeking meaning : a process approach to library and information services (2004) 0.02
    0.018982112 = product of:
      0.028473169 = sum of:
        0.018304251 = weight(_text_:retrieval in 3347) [ClassicSimilarity], result of:
          0.018304251 = score(doc=3347,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.11816074 = fieldWeight in 3347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3347)
        0.010168918 = product of:
          0.020337837 = sum of:
            0.020337837 = weight(_text_:conference in 3347) [ClassicSimilarity], result of:
              0.020337837 = score(doc=3347,freq=2.0), product of:
                0.19418365 = queryWeight, product of:
                  3.7918143 = idf(docFreq=2710, maxDocs=44218)
                  0.051211275 = queryNorm
                0.10473506 = fieldWeight in 3347, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7918143 = idf(docFreq=2710, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3347)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
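This explain tree has the nested-boolean shape: two of three query clauses matched (coord(2/3)), and the "conference" clause itself sits inside a sub-query where only one of two terms matched (coord(1/2)). A sketch reproducing the final score under the same assumed ClassicSimilarity formulas as above:

```python
import math

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term TF-IDF score: queryWeight * fieldWeight (assumed ClassicSimilarity)."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

# Constants taken from the explain output for doc 3347
query_norm = 0.051211275
field_norm = 0.01953125

retrieval = classic_score(4.0, 5836, 44218, query_norm, field_norm)   # ~ 0.018304251
conference = classic_score(2.0, 2710, 44218, query_norm, field_norm)  # ~ 0.020337837

# 'conference' is halved by the inner coord(1/2); the sum is scaled by coord(2/3)
total = (retrieval + conference * 0.5) * (2 / 3)                      # ~ 0.018982112
```

Note how the smaller fieldNorm (0.01953125 vs. 0.0390625 in the entries above) depresses both clause scores: this document's indexed field is longer, so each term match counts for less.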
    
    Footnote
    It is important to understand the origins of Kuhlthau's ideas in the work of the educational theorists Dewey, Kelly and Bruner. Putting the matter in a rather simplistic manner, Dewey identified stages of cognition, Kelly attached the idea of feelings being associated with cognitive stages, and Bruner added the notion of actions associated with both. We can see this framework underlying Kuhlthau's research in her description of the actions undertaken at different stages in the search process and the associated feelings. Central to the transfer of these ideas to practice is the notion of the 'Zone of Intervention', or the point at which an information seeker can proceed more effectively with assistance than without. Kuhlthau identifies five intervention zones, the first of which involves intervention by the information seeker him/herself. The remaining four involve interventions of different kinds, which the author distinguishes according to the level of mediation required: zone 2 involves the librarian as 'locator', i.e., providing the quick reference response; zone 3 as 'identifier', i.e., discovering potentially useful information resources, but taking no further interest in the user; zone 4 as 'advisor', i.e., not only identifying possibly helpful resources, but guiding the user through them; and zone 5 as 'counsellor', which might be seen as a more intensive version of the advisor, guiding not simply on the sources, but also on the overall process, through a continuing interaction with the user. Clearly, these processes can be used in workshops, conference presentations and the classroom to sensitise the practitioner and the student to the range of helping strategies that ought to be made available to the information seeker. However, the author goes further, identifying an additional set of strategies for intervening in the search process, which she describes as 'collaborating', 'continuing', 'choosing', 'charting', 'conversing' and 'composing'.
'Collaboration' clearly involves the participation of others - fellow students, work peers, fellow researchers, or whatever, in the search process; 'continuing' intervention is associated with information seeking that involves a succession of actions - the intermediary 'stays with' the searcher throughout the process, available as needed to support him/her; 'choosing', that is, enabling the information seeker to identify the available choices in any given situation; 'charting' involves presenting a graphic illustration of the overall process and locating the information seeker in that chart; 'conversing' is the encouragement of discussion about the problem(s), and 'composing' involves the librarian as counsellor in encouraging the information seeker to document his/her experience, perhaps by keeping a diary of the process.
    LCSH
    Information retrieval
    Subject
    Information retrieval
  5. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.01
    0.0060400954 = product of:
      0.018120285 = sum of:
        0.018120285 = weight(_text_:retrieval in 1981) [ClassicSimilarity], result of:
          0.018120285 = score(doc=1981,freq=2.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.11697317 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.33333334 = coord(1/3)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
  6. Schweibenz, W.; Thissen, F.: Qualität im Web : Benutzerfreundliche Webseiten durch Usability Evaluation (2003) 0.01
    0.0057820175 = product of:
      0.017346052 = sum of:
        0.017346052 = product of:
          0.034692105 = sum of:
            0.034692105 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
              0.034692105 = score(doc=767,freq=2.0), product of:
                0.17933317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051211275 = queryNorm
                0.19345059 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=767)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:24:08
  7. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.01
    0.0051772245 = product of:
      0.015531673 = sum of:
        0.015531673 = weight(_text_:retrieval in 4401) [ClassicSimilarity], result of:
          0.015531673 = score(doc=4401,freq=2.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.10026272 = fieldWeight in 4401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
      0.33333334 = coord(1/3)
    
    Abstract
    With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent the semantics of semi-structured information, enabling sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet-, and internet-based environments, employing the full power of ontologies to support knowledge management from the perspectives of both the information client and the information provider. The aim of the book is to support efficient and effective knowledge management, with a focus on weakly structured online information sources. It is aimed primarily at researchers in the area of knowledge management and information retrieval and will also be a useful reference for students in computer science at the postgraduate level and for business managers who are aiming to improve their corporations' information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for the application of the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from the adoption of Semantic Web based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research.
    * Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively.
    * Introduces an intelligent search tool that supports users in accessing information, and a tool environment for maintenance, conversion and acquisition of information sources.
    * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods.
    The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
