Search (7 results, page 1 of 1)

  • classification_ss:"54.72 / Künstliche Intelligenz"
  1. Hüttenegger, G.: Open Source Knowledge Management [Open-source-knowledge-Management] (2006) 0.04
    0.041229755 = product of:
      0.16491902 = sum of:
        0.16491902 = weight(_text_:open in 652) [ClassicSimilarity], result of:
          0.16491902 = score(doc=652,freq=20.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.78667694 = fieldWeight in 652, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=652)
      0.25 = coord(1/4)
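These relevance explanations are standard Lucene ClassicSimilarity (TF-IDF) output: score = coord × queryWeight × fieldWeight, with tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)). As a minimal sketch (the function names are mine, not Lucene's), the top result's score can be reproduced from the numbers in the tree:

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    # score = coord * queryWeight * fieldWeight
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm                     # queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return coord * query_weight * field_weight

# Values taken from the explain tree for result 1 ("open", doc 652):
score = classic_score(freq=20.0, doc_freq=1330, max_docs=44218,
                      query_norm=0.046553567, field_norm=0.0390625,
                      coord=0.25)
print(score)  # close to the displayed 0.041229755
```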
    
    Abstract
    The book presents the many ways in which open-source software can support knowledge management. The author explains the fundamentals and fields of application of open-source software in knowledge management and, based on analyses of concrete open-source products, develops decision criteria and guidelines for improving knowledge management and open-source software. Particular attention is paid to cost savings and efficiency.
    Content
    Contents: Definitions of knowledge, knowledge management, and open source.- Vision of a knowledge management (KM) system.- Existing open-source basis.- Technical basis.- Starting with a groupware system.- Alternatively, starting with a content management system.- Integrating groupware or CMS, or extending with a DMS.- Further expansion.- Summaries, conclusion, and outlook.- Bibliography.- Index.
    RSWK
    Wissensmanagement / Open Source
    Subject
    Wissensmanagement / Open Source
  2. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.03
    0.026769744 = product of:
      0.05353949 = sum of:
        0.041721575 = weight(_text_:open in 4725) [ClassicSimilarity], result of:
          0.041721575 = score(doc=4725,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.19901526 = fieldWeight in 4725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.03125 = fieldNorm(doc=4725)
        0.0118179135 = product of:
          0.023635827 = sum of:
            0.023635827 = weight(_text_:access in 4725) [ClassicSimilarity], result of:
              0.023635827 = score(doc=4725,freq=2.0), product of:
                0.15778996 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046553567 = queryNorm
                0.14979297 = fieldWeight in 4725, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4725)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
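For multi-clause queries, Lucene sums the per-clause weights and scales by coordination factors (matching clauses / total clauses), which is what the nested `sum of` / `coord` lines express. A small sketch, plugging in the numbers from this explain tree:

```python
# Reconstructing result 2's score from its explain tree: two matching
# clauses ("open" and "access"), an inner coord(1/2) on the nested clause,
# and an outer coord(2/4) on the whole query.
open_weight = 0.041721575
access_weight = 0.023635827 * 0.5            # inner coord(1/2)
total = (open_weight + access_weight) * 0.5  # outer coord(2/4)
print(total)  # matches the displayed 0.026769744
```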
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
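The Linked Data model this abstract describes (resources identified by URIs, described as triples against shared vocabularies, linked across data sets by reusing URIs) can be shown in a minimal, purely illustrative sketch; the example.org URIs below are invented, while the dcterms/foaf predicate URIs are real vocabulary terms:

```python
# Resources are identified by URIs and described as subject-predicate-object
# triples; a second data set can extend the description of the same URI,
# growing the global data space without central coordination.
book = "http://example.org/book/linked-data"      # hypothetical URI
author = "http://example.org/person/bizer"        # hypothetical URI

triples = [
    (book, "http://purl.org/dc/terms/title", "Linked Data"),
    (book, "http://purl.org/dc/terms/creator", author),
    # contributed by another data set, about the same author URI:
    (author, "http://xmlns.com/foaf/0.1/name", "C. Bizer"),
]

def describe(resource, graph):
    """Return all predicate/object pairs for one resource URI."""
    return [(p, o) for s, p, o in graph if s == resource]
```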
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.01
    0.010521023 = product of:
      0.04208409 = sum of:
        0.04208409 = sum of:
          0.014772392 = weight(_text_:access in 150) [ClassicSimilarity], result of:
            0.014772392 = score(doc=150,freq=2.0), product of:
              0.15778996 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.046553567 = queryNorm
              0.093620606 = fieldWeight in 150, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.027311698 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.027311698 = score(doc=150,freq=6.0), product of:
              0.16302267 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046553567 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.25 = coord(1/4)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    The final part of the book discusses research in multimedia content management systems and the Semantic Web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services are introduced. Topics in natural language, speech, and image processing techniques and their application to multimedia indexing and content-based retrieval are elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the Semantic Web are introduced, including applications that have taken advantage of the metadata provided by MPEG-7 in order to realize advanced access services for multimedia content. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the Semantic Web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers.
  4. Hermans, J.: Ontologiebasiertes Information Retrieval für das Wissensmanagement (2008) 0.01
    0.010430394 = product of:
      0.041721575 = sum of:
        0.041721575 = weight(_text_:open in 506) [ClassicSimilarity], result of:
          0.041721575 = score(doc=506,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.19901526 = fieldWeight in 506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.03125 = fieldNorm(doc=506)
      0.25 = coord(1/4)
    
    Abstract
    Companies today regularly face the challenge of quickly identifying relevant information in large collections of documents. It turns out, however, that search methods which merely perform syntactic matching of information needs against potentially relevant documents often fail to meet the expectations placed on them. The use of ontologies for information retrieval offers promising potential here. In ontology-based information retrieval, ontologies are used to represent knowledge in a form that can be processed by information systems. Taking this explicated knowledge into account in search algorithms then leads to better coverage of information needs. In this book, Jan Hermans presents an adaptive reference model for the development of ontology-based information retrieval systems. The central element of his model is the adaptation of the retrieval process to its specific context of use by means of proven techniques that already support selected aspects of ontology-based information retrieval effectively and efficiently. The application of the reference model is illustrated by a case study in which an information retrieval system for searching open-source components is developed. The book is aimed equally at lecturers and students of information systems, computer science, and business administration, as well as at practitioners who want to improve information search in their organizations. Jan Hermans, born in 1978, studied information systems at the Westfälische Wilhelms-Universität in Münster. From 2003 he worked as a research assistant at the European Research Center for Information Systems at WWU Münster. His research focused on knowledge management and information retrieval. In May 2008 he received his doctorate in economics.
  5. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.006108161 = product of:
      0.024432644 = sum of:
        0.024432644 = sum of:
          0.0118179135 = weight(_text_:access in 1789) [ClassicSimilarity], result of:
            0.0118179135 = score(doc=1789,freq=2.0), product of:
              0.15778996 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.046553567 = queryNorm
              0.074896485 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.012614732 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.012614732 = score(doc=1789,freq=2.0), product of:
              0.16302267 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046553567 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.25 = coord(1/4)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided.
    This text concludes with a thorough and useful 17-page index and a lengthy yet interesting 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL is provided suggesting that readers may view all the book's figures in color online, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
  6. Handbuch der Künstlichen Intelligenz (2003) 0.01
    0.005518945 = product of:
      0.02207578 = sum of:
        0.02207578 = product of:
          0.04415156 = sum of:
            0.04415156 = weight(_text_:22 in 2916) [ClassicSimilarity], result of:
              0.04415156 = score(doc=2916,freq=2.0), product of:
                0.16302267 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046553567 = queryNorm
                0.2708308 = fieldWeight in 2916, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2916)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    21. 3.2008 19:10:22
  7. Social information retrieval systems : emerging technologies and applications for searching the Web effectively (2008) 0.00
    0.0041782632 = product of:
      0.016713053 = sum of:
        0.016713053 = product of:
          0.033426106 = sum of:
            0.033426106 = weight(_text_:access in 4127) [ClassicSimilarity], result of:
              0.033426106 = score(doc=4127,freq=4.0), product of:
                0.15778996 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046553567 = queryNorm
                0.21183924 = fieldWeight in 4127, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4127)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    LCSH
    World Wide Web / Subject access
    Subject
    World Wide Web / Subject access
