Search (6 results, page 1 of 1)

  • classification_ss:"54.62 / Datenstrukturen"
  1. Frohner, H.: Social Tagging : Grundlagen, Anwendungen, Auswirkungen auf Wissensorganisation und soziale Strukturen der User (2010) 0.01
    0.013346733 = product of:
      0.05338693 = sum of:
        0.05338693 = weight(_text_:von in 4723) [ClassicSimilarity], result of:
          0.05338693 = score(doc=4723,freq=16.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.416867 = fieldWeight in 4723, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4723)
      0.25 = coord(1/4)
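    The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a sketch, its numbers can be reproduced from the leaf values; the function name is illustrative, not part of the catalog software:

    ```python
    import math

    def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
        """Recompute a Lucene ClassicSimilarity explain tree.

        tf          = sqrt(term frequency in the field)
        idf         = 1 + ln(max_docs / (doc_freq + 1))
        queryWeight = idf * queryNorm
        fieldWeight = tf * idf * fieldNorm
        score       = queryWeight * fieldWeight * product(coord factors)
        """
        tf = math.sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        score = query_weight * field_weight
        for coord in coords:
            score *= coord
        return score

    # Leaf values from the explain tree for document 4723 (_text_:von):
    score = classic_similarity_score(
        freq=16.0, doc_freq=8340, max_docs=44218,
        query_norm=0.04800207, field_norm=0.0390625,
        coords=[0.25],  # coord(1/4): 1 of 4 query clauses matched
    )
    print(score)  # approximately 0.013346733, matching the explain output
    ```

    Note that tf(freq=16.0) = 4.0 and idf(docFreq=8340, maxDocs=44218) = 2.6679487 in the tree follow directly from the sqrt and 1 + ln formulas above.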
    
    Abstract
    Social tagging is a method of semantic data organization. In contrast to traditional approaches, categorization is not carried out by experts but developed collaboratively by a large number of users. In principle there are no restrictions on the data involved: it may be multimedia content as well as scientific literature. Every user, regardless of expertise or intention, is invited to support the categorization of the resources in use by means of freely chosen tags. The overall result is a collection of highly diverse subjective assessments that together constitute a comprehensive semantic organization of certain content. The aim of this book is first to discuss the foundations and applications of social tagging, and then to analyze specifically its effects on knowledge organization and on the social relationships of users. One of the central findings of this work is that the collaboratively generated metadata exhibit unexpectedly high quality and significance, even though ambiguities and variant spellings could affect them negatively. Social tagging is particularly effective for organizing very large or heterogeneous collections that can no longer be handled by conventional, expert-based categorization methods, or that automatic methods index with poorer quality. Social tagging fosters not only knowledge organization but also collaboration and community building, which is why it can also be used effectively in teaching.
  2. Erbarth, M.: Wissensrepräsentation mit semantischen Netzen : Grundlagen mit einem Anwendungsbeispiel für das Multi-Channel-Publishing (2006) 0.01
    0.009437565 = product of:
      0.03775026 = sum of:
        0.03775026 = weight(_text_:von in 714) [ClassicSimilarity], result of:
          0.03775026 = score(doc=714,freq=8.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29476947 = fieldWeight in 714, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0390625 = fieldNorm(doc=714)
      0.25 = coord(1/4)
    
    Abstract
    "We are drowning in information, but starved for knowledge." With these words, trend researcher John Naisbitt expresses that people today are no longer able to exploit efficiently the flood of information pouring over them. We live in a globalized world with a diverse range of media such as the press, radio, television, and the Internet. The problem of large volumes of information that cannot be adequately exploited is especially acute on the Internet. Quantity, reach, currency, and availability are the great strengths of the World Wide Web (WWW); its problems lie in the quality and density of the information. Information retrieval must be made more efficient in order to preserve the economic and cultural benefits of a networked world. Matthias Erbarth first examines precisely this set of issues, and then develops a format for electronic documents, in particular commercial publications. This application example presents a semantic content description with metadata using XML, in which the use of references and the evaluation of relationships give particular consideration to a network-like representation.
    Footnote
    Also published as: Pforzheim, Hochschule, diploma thesis, 2002, under the title: Erbarth, Matthias: Abbildung einer Publikation als semantisches Netz unter Verwendung von XML-Technologien
  3. Learning XML : [creating self describing data] (2001) 0.00
    7.795323E-4 = product of:
      0.0031181292 = sum of:
        0.0031181292 = product of:
          0.009354387 = sum of:
            0.009354387 = weight(_text_:a in 1744) [ClassicSimilarity], result of:
              0.009354387 = score(doc=1744,freq=22.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.16900843 = fieldWeight in 1744, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1744)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
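    Unlike the tree for result 1, this explain tree has an extra nesting level: a coord(1/3) inside the inner sum and the coord(1/4) outside it. As a sketch, the final score can be checked by inlining the same ClassicSimilarity arithmetic (variable names are illustrative):

    ```python
    import math

    # Recompute the explain tree for document 1744 (_text_:a),
    # which applies two coord factors: coord(1/3), then coord(1/4).
    tf = math.sqrt(22.0)                        # tf(freq=22.0)  -> 4.690416
    idf = 1.0 + math.log(44218 / (37942 + 1))   # idf            -> 1.153047
    field_weight = tf * idf * 0.03125           # fieldNorm      -> 0.16900843
    query_weight = idf * 0.04800207             # queryNorm      -> 0.055348642
    score = query_weight * field_weight         # weight(_text_:a) -> 0.009354387
    score *= 1.0 / 3.0                          # coord(1/3)     -> 0.0031181292
    score *= 1.0 / 4.0                          # coord(1/4)     -> 7.795323E-4
    print(score)
    ```

    The very low idf (1.153047) reflects how common the term "a" is: it occurs in 37942 of 44218 documents, so even 22 occurrences contribute little to the score.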
    
    Abstract
    Although Learning XML covers XML with a broad brush, it nevertheless presents the key elements of the technology with enough detail to familiarise the reader with the crucial markup language. This guide is brief enough to tackle in a weekend. Author Erik T Ray begins with an excellent summary of XML's history as an outgrowth of SGML and HTML. He outlines very clearly the elements of markup, demystifying concepts such as attributes, entities and namespaces with numerous clear examples. To illustrate a real-world XML application, he gives the reader a look at a document written in DocBook--a publicly available XML document type for publishing technical writings--and explains the sections of the document step by step. A simplified version of DocBook is used later in the book to illustrate transformation--a powerful benefit of XML. The all-important Document Type Definition (DTD) is covered in depth, but the still-unofficial alternative--XML Schema--is only briefly addressed. The author makes liberal use of graphical illustrations, tables and code to demonstrate concepts along the way, keeping the reader engaged and on track. Ray also gets into a deep discussion of programming XML utilities with Perl. Learning XML is a highly readable introduction to XML for readers with existing knowledge of markup and Web technologies, and it meets its goals very well--to deliver a broad perspective of XML and its potential.
  4. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.00
    7.795323E-4 = product of:
      0.0031181292 = sum of:
        0.0031181292 = product of:
          0.009354387 = sum of:
            0.009354387 = weight(_text_:a in 4725) [ClassicSimilarity], result of:
              0.009354387 = score(doc=4725,freq=22.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.16900843 = fieldWeight in 4725, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4725)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
  5. Handbook on ontologies (2004) 0.00
    6.569507E-4 = product of:
      0.0026278028 = sum of:
        0.0026278028 = product of:
          0.007883408 = sum of:
            0.007883408 = weight(_text_:a in 1952) [ClassicSimilarity], result of:
              0.007883408 = score(doc=1952,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14243183 = fieldWeight in 1952, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1952)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    An ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents. The concept is important for enabling knowledge sharing and reuse. The Handbook on Ontologies provides a comprehensive overview of the current status and future prospects of the field of ontologies. The handbook presents standards that have been created recently, surveys methods that have been developed, and shows how to bring both into the practice of ontology infrastructures and applications that are the best of their kind.
  6. Grossman, D.A.; Frieder, O.: Information retrieval : algorithms and heuristics (2004) 0.00
    5.757227E-4 = product of:
      0.0023028909 = sum of:
        0.0023028909 = product of:
          0.0069086724 = sum of:
            0.0069086724 = weight(_text_:a in 1486) [ClassicSimilarity], result of:
              0.0069086724 = score(doc=1486,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12482099 = fieldWeight in 1486, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1486)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Interested in how an efficient search engine works? Want to know what algorithms are used to rank the documents returned in response to user requests? The authors answer these and other key questions about information retrieval design and implementation. This book is not yet another high-level text. Instead, algorithms are thoroughly described, making the book ideally suited for both computer science students and practitioners who work on search-related applications. As stated in the foreword, this book provides a current, broad, and detailed overview of the field and is the only one that does so. Examples are used throughout to illustrate the algorithms. The authors explain how a query is ranked against a document collection using either a single retrieval strategy or a combination of them, and how an assortment of utilities is integrated into the query-processing scheme to improve these rankings. Methods for building and compressing text indexes, querying and retrieving documents in multiple languages, and using parallel or distributed processing to expedite the search are likewise described. This edition is a major expansion of the one published in 1998. New edition 2005: besides updating the entire book with current techniques, it includes new sections on language models, cross-language information retrieval, peer-to-peer processing, XML search, mediators, and duplicate document detection.
