Search (31 results, page 2 of 2)

  • classification_ss:"ST 205"
  1. Köchert, R.: Auf der Suche im Internet : Die Etrusker-Spitzmaus - Online-Wissen effizient abrufen und nutzen (2005) 0.02
    0.020984352 = product of:
      0.08393741 = sum of:
        0.018946756 = weight(_text_:und in 1957) [ClassicSimilarity], result of:
          0.018946756 = score(doc=1957,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.2968967 = fieldWeight in 1957, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.02484572 = weight(_text_:der in 1957) [ClassicSimilarity], result of:
          0.02484572 = score(doc=1957,freq=10.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.38630107 = fieldWeight in 1957, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.018946756 = weight(_text_:und in 1957) [ClassicSimilarity], result of:
          0.018946756 = score(doc=1957,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.2968967 = fieldWeight in 1957, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.017077852 = weight(_text_:des in 1957) [ClassicSimilarity], result of:
          0.017077852 = score(doc=1957,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.2141777 = fieldWeight in 1957, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.0041203224 = weight(_text_:in in 1957) [ClassicSimilarity], result of:
          0.0041203224 = score(doc=1957,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.10520181 = fieldWeight in 1957, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
      0.25 = coord(5/20)
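The explain tree above is Lucene's ClassicSimilarity (classic TF-IDF) score breakdown, and its arithmetic can be checked by hand. A minimal sketch in Python, using the formulas as labeled in the explain output and the constants copied from the first "und" clause of entry 1 (variable names are ours, not Lucene's):

```python
import math

# Recompute the first "und" term of entry 1 (doc 1957):
#   tf(freq)    = sqrt(freq)
#   idf(df, N)  = 1 + ln(N / (df + 1))
#   fieldWeight = tf * idf * fieldNorm
#   queryWeight = idf * queryNorm
#   term score  = queryWeight * fieldWeight
freq, doc_freq, max_docs = 6.0, 13101, 44218
query_norm = 0.02879306  # depends on the whole query; copied from the output
field_norm = 0.0546875   # field length norm, stored lossily as one byte

tf = math.sqrt(freq)                           # ~2.4494898
idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~2.216367
field_weight = tf * idf * field_norm           # ~0.2968967
query_weight = idf * query_norm                # ~0.06381599
term_score = query_weight * field_weight       # ~0.018946756

# The document score sums the five matching clauses, scaled by
# coord(5/20) because 5 of the 20 query clauses matched:
clause_scores = [0.018946756, 0.02484572, 0.018946756,
                 0.017077852, 0.0041203224]
doc_score = sum(clause_scores) * (5 / 20)      # ~0.020984352
```

The tiny discrepancies against the printed values come from Lucene computing in single precision; the reconstruction agrees to about six decimal places.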
    
    Abstract
    The physicist Ralf Köchert has engaged with the medium of the Internet and wants to bring the reader closer to its original idea: searching for information in a professional way. Particularly valuable in this context are the remarks on targeted searching. The author also examines the architecture of the well-known search engine Google. In addition, the book explains how the hypertext system, i.e. the system of links on the Internet, works. Readers who work through this book will likely save some time and energy in their daily research.
  2. Hübener, M.: Suchmaschinenoptimierung kompakt : anwendungsorientierte Techniken für die Praxis (2009) 0.02
    0.020843595 = product of:
      0.08337438 = sum of:
        0.016240077 = weight(_text_:und in 3911) [ClassicSimilarity], result of:
          0.016240077 = score(doc=3911,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.2544829 = fieldWeight in 3911, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3911)
        0.025198158 = weight(_text_:der in 3911) [ClassicSimilarity], result of:
          0.025198158 = score(doc=3911,freq=14.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.3917808 = fieldWeight in 3911, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=3911)
        0.016240077 = weight(_text_:und in 3911) [ClassicSimilarity], result of:
          0.016240077 = score(doc=3911,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.2544829 = fieldWeight in 3911, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3911)
        0.020701483 = weight(_text_:des in 3911) [ClassicSimilarity], result of:
          0.020701483 = score(doc=3911,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.25962257 = fieldWeight in 3911, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=3911)
        0.0049945856 = weight(_text_:in in 3911) [ClassicSimilarity], result of:
          0.0049945856 = score(doc=3911,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.12752387 = fieldWeight in 3911, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3911)
      0.25 = coord(5/20)
    
    Abstract
    This book offers a comprehensive treatment of search engine optimization methods. After an introduction to the topic, a first focus is on concrete, actionable guidance for optimizing a website for search engines. To this end, a nine-step optimization cycle is presented, covering off-page optimization, on-page optimization, and keyword research. The author additionally introduces the category of content strategy in order to systematize the sources and distribution channels of potential new content. To further improve clarity and practical relevance, the author demonstrates the optimization cycle on a real, existing website.
    Content
    Introduction - Fundamentals - How search engines work - Particularities of the Google search engine - Multimedia on the World Wide Web - Structuring a web presence - The 9-point optimization plan - Applying the optimization plan to the example of www.still.de - Summary
  3. Block, C.H.: Das Intranet : die neue Informationsverarbeitung (2004) 0.02
    0.018249594 = product of:
      0.072998375 = sum of:
        0.010938915 = weight(_text_:und in 2396) [ClassicSimilarity], result of:
          0.010938915 = score(doc=2396,freq=2.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.17141339 = fieldWeight in 2396, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2396)
        0.015713813 = weight(_text_:der in 2396) [ClassicSimilarity], result of:
          0.015713813 = score(doc=2396,freq=4.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.24431825 = fieldWeight in 2396, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2396)
        0.010938915 = weight(_text_:und in 2396) [ClassicSimilarity], result of:
          0.010938915 = score(doc=2396,freq=2.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.17141339 = fieldWeight in 2396, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2396)
        0.029579712 = weight(_text_:des in 2396) [ClassicSimilarity], result of:
          0.029579712 = score(doc=2396,freq=6.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.3709667 = fieldWeight in 2396, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2396)
        0.005827016 = weight(_text_:in in 2396) [ClassicSimilarity], result of:
          0.005827016 = score(doc=2396,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.14877784 = fieldWeight in 2396, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2396)
      0.25 = coord(5/20)
    
    Footnote
    Review in: Wechselwirkung 26(2004) Nr.128, S.110: "This book shows the many possibilities that an intranet offers a company - from a corporate information system to use as an extranet serving as a basis for workflow and knowledge management. It becomes clear that an intranet is of enormous strategic importance for the information processing of the entire company. Besides the foundations an intranet deployment requires, the author Carl Hans Block covers in particular the practical procedure both for building the intranet and for running it day to day."
  4. Hüsken, P.: Informationssuche im Semantic Web : Methoden des Information Retrieval für die Wissensrepräsentation (2006) 0.02
    0.01639928 = product of:
      0.06559712 = sum of:
        0.009376213 = weight(_text_:und in 4332) [ClassicSimilarity], result of:
          0.009376213 = score(doc=4332,freq=2.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.14692576 = fieldWeight in 4332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4332)
        0.016496068 = weight(_text_:der in 4332) [ClassicSimilarity], result of:
          0.016496068 = score(doc=4332,freq=6.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.25648075 = fieldWeight in 4332, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=4332)
        0.009376213 = weight(_text_:und in 4332) [ClassicSimilarity], result of:
          0.009376213 = score(doc=4332,freq=2.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.14692576 = fieldWeight in 4332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4332)
        0.025354039 = weight(_text_:des in 4332) [ClassicSimilarity], result of:
          0.025354039 = score(doc=4332,freq=6.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.31797147 = fieldWeight in 4332, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=4332)
        0.0049945856 = weight(_text_:in in 4332) [ClassicSimilarity], result of:
          0.0049945856 = score(doc=4332,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.12752387 = fieldWeight in 4332, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4332)
      0.25 = coord(5/20)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of published content in new standardized languages such as RDF Schema and OWL. This thesis addresses the information retrieval aspect, i.e. it investigates to what extent information-search methods carry over to modeled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either modeled explicitly or can be derived implicitly by inference. Building on the retrieval engine PIRE, developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
  5. Horch, A.; Kett, H.; Weisbecker, A.: Semantische Suchsysteme für das Internet : Architekturen und Komponenten semantischer Suchmaschinen (2013) 0.01
    0.014706186 = product of:
      0.07353093 = sum of:
        0.02344053 = weight(_text_:und in 4063) [ClassicSimilarity], result of:
          0.02344053 = score(doc=4063,freq=18.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.3673144 = fieldWeight in 4063, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.0194408 = weight(_text_:der in 4063) [ClassicSimilarity], result of:
          0.0194408 = score(doc=4063,freq=12.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.30226544 = fieldWeight in 4063, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.02344053 = weight(_text_:und in 4063) [ClassicSimilarity], result of:
          0.02344053 = score(doc=4063,freq=18.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.3673144 = fieldWeight in 4063, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.0072090626 = weight(_text_:in in 4063) [ClassicSimilarity], result of:
          0.0072090626 = score(doc=4063,freq=12.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.18406484 = fieldWeight in 4063, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
      0.2 = coord(4/20)
    
    Abstract
    The flood of information today grows exponentially. In this "information explosion", an unmanageable amount of new information appears on the Web every day: for example, 430 German-language Wikipedia articles, 2.4 million tweets on Twitter, and 12.2 million comments on Facebook. While in Germany a few years ago Google was used as practically the only search engine for accessing information on the Web, today the opinions published in social media - and with them the pre-selection and evaluation of information by individual experts and opinion leaders - are gaining importance. But how can topic-specific information be identified efficiently for concrete questions and be prepared and visualized to match the need? This study surveys semantic standards and formats, the processes of semantic search, methods and techniques of semantic search systems, components for building semantic search engines, and the architecture of existing applications. It explains the basic architecture of semantic search systems and presents methods of semantic search. It also presents software tools with which individual functionalities of semantic search engines can be implemented. Finally, existing semantic search engines are examined to illustrate how the systems differ in architecture and functionality.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  6. Widhalm, R.; Mück, T.: Topic maps : Semantische Suche im Internet (2002) 0.01
    0.012127953 = product of:
      0.04851181 = sum of:
        0.010826718 = weight(_text_:und in 4731) [ClassicSimilarity], result of:
          0.010826718 = score(doc=4731,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.16965526 = fieldWeight in 4731, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4731)
        0.0089793205 = weight(_text_:der in 4731) [ClassicSimilarity], result of:
          0.0089793205 = score(doc=4731,freq=4.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.13961042 = fieldWeight in 4731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=4731)
        0.010826718 = weight(_text_:und in 4731) [ClassicSimilarity], result of:
          0.010826718 = score(doc=4731,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.16965526 = fieldWeight in 4731, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4731)
        0.013800989 = weight(_text_:des in 4731) [ClassicSimilarity], result of:
          0.013800989 = score(doc=4731,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.17308173 = fieldWeight in 4731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=4731)
        0.0040780623 = weight(_text_:in in 4731) [ClassicSimilarity], result of:
          0.0040780623 = score(doc=4731,freq=6.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.1041228 = fieldWeight in 4731, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4731)
      0.25 = coord(5/20)
    
    Abstract
    This work covers current developments in the subject indexing of information sources on the Internet. Topic Maps - semantic models of networked information resources based on XML or HyTime - provide all the modeling constructs needed to classify documents on the Internet and to lay an associative, semantic network over them. Alongside introductions to XML, XLink, XPointer, and HyTime, usage scenarios show how this new technology works for content management and information retrieval on the Internet. The design of a query language is sketched, as is the prototype of an intelligent search engine. The book shows how Topic Maps point the way to semantically driven search processes on the Internet.
    Content
    Topic Maps - introduction to the ISO standard (Topics, Associations, Scopes, Facets, Topic Maps) - Foundations of XML (structure, components, element and attribute definitions, DTD, XLink, XPointer) - How does a "Heringsschmaus" come about? A concrete example of a Topic Map - Topic Maps Meta DTD: the formal description of the standard - HyTime as the underlying formalism (Bounded Object Sets, Location Addressing, Hyperlinks in HyTime) - Prototype of a Topic Map repository (development process for Topic Maps, prototype specification, technical realization of the prototype) - A semantic data model for storing Topic Maps - A prototypical query language for Topic Maps - Proposed extensions to the ISO standard
  7. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998) 0.01
    0.0055863475 = product of:
      0.055863474 = sum of:
        0.051256813 = weight(_text_:allgemeines in 493) [ClassicSimilarity], result of:
          0.051256813 = score(doc=493,freq=4.0), product of:
            0.16427658 = queryWeight, product of:
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.02879306 = queryNorm
            0.31201532 = fieldWeight in 493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.02734375 = fieldNorm(doc=493)
        0.004606661 = weight(_text_:in in 493) [ClassicSimilarity], result of:
          0.004606661 = score(doc=493,freq=10.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.11761922 = fieldWeight in 493, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=493)
      0.1 = coord(2/20)
    
    Abstract
    Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to: the process behind architecting a large, complex site; web site hierarchy design and organization. Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
    Classification
    ST 200 Informatik / Monographien / Vernetzung, verteilte Systeme / Allgemeines, Netzmanagement
    RVK
    ST 200 Informatik / Monographien / Vernetzung, verteilte Systeme / Allgemeines, Netzmanagement
  8. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.00
    3.9485664E-4 = product of:
      0.007897133 = sum of:
        0.007897133 = weight(_text_:in in 2605) [ClassicSimilarity], result of:
          0.007897133 = score(doc=2605,freq=10.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.20163295 = fieldWeight in 2605, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2605)
      0.05 = coord(1/20)
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice, is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforce key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end of chapter problems (Instructors only) / Test collections for exercises / Galago search engine
  9. Hitzler, P.; Krötzsch, M.; Rudolph, S.: Foundations of Semantic Web technologies (2010) 0.00
    2.8836253E-4 = product of:
      0.0057672504 = sum of:
        0.0057672504 = weight(_text_:in in 359) [ClassicSimilarity], result of:
          0.0057672504 = score(doc=359,freq=12.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.14725187 = fieldWeight in 359, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=359)
      0.05 = coord(1/20)
    
    Abstract
    This text introduces the standardized knowledge representation languages for modeling ontologies operating at the core of the semantic web. It covers RDF schema, Web Ontology Language (OWL), rules, query languages, the OWL 2 revision, and the forthcoming Rule Interchange Format (RIF). A 2010 CHOICE Outstanding Academic Title ... The nine chapters of the book guide the reader through the major foundational languages for the semantic Web and highlight the formal semantics ... the book has very interesting supporting material and exercises, is oriented to W3C standards, and provides the necessary foundations for the semantic Web. It will be easy to follow by the computer scientist who already has a basic background on semantic Web issues; it will also be helpful for both self-study and teaching purposes. I recommend this book primarily as a complementary textbook for a graduate or undergraduate course in a computer science or a Web science academic program. --Computing Reviews, February 2010. This book is unique in several respects. It contains an in-depth treatment of all the major foundational languages for the Semantic Web and provides a full treatment of the underlying formal semantics, which is central to the Semantic Web effort. It is also the very first textbook that addresses the forthcoming W3C recommended standards OWL 2 and RIF. Furthermore, the covered topics and underlying concepts are easily accessible for the reader due to a clear separation of syntax and semantics ... I am confident this book will be well received and play an important role in training a larger number of students who will seek to become proficient in this growing discipline.
    Series
    Chapman & Hall/CRC textbooks in computing
  10. Manning, C.D.; Raghavan, P.; Schütze, H.: Introduction to information retrieval (2008) 0.00
    2.35447E-4 = product of:
      0.00470894 = sum of:
        0.00470894 = weight(_text_:in in 4041) [ClassicSimilarity], result of:
          0.00470894 = score(doc=4041,freq=8.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.120230645 = fieldWeight in 4041, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4041)
      0.05 = coord(1/20)
    
    Abstract
    Class-tested and coherent, this textbook teaches information retrieval, including web search, text classification, and text clustering from basic concepts. Ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students. Slides and additional exercises are available for lecturers. - This book provides what Salton and Van Rijsbergen both failed to achieve. Even more important, unlike some other books in IR, the authors appear to care about making the theory as accessible as possible to the reader, on occasion including short primers to certain topics or choosing to explain difficult concepts using simplified approaches. Its coverage [is] excellent, the quality of writing high and I was surprised how much I learned from reading it. I think the online resources are impressive.
    Content
    Contents: Boolean retrieval - The term vocabulary & postings lists - Dictionaries and tolerant retrieval - Index construction - Index compression - Scoring, term weighting & the vector space model - Computing scores in a complete search system - Evaluation in information retrieval - Relevance feedback & query expansion - XML retrieval - Probabilistic information retrieval - Language models for information retrieval - Text classification & Naive Bayes - Vector space classification - Support vector machines & machine learning on documents - Flat clustering - Hierarchical clustering - Matrix decompositions & latent semantic indexing - Web search basics - Web crawling and indexes - Link analysis. See the digital version at: http://nlp.stanford.edu/IR-book/pdf/irbookprint.pdf.
  11. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.00
    2.35447E-4 = product of:
      0.00470894 = sum of:
        0.00470894 = weight(_text_:in in 4725) [ClassicSimilarity], result of:
          0.00470894 = score(doc=4725,freq=8.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.120230645 = fieldWeight in 4725, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4725)
      0.05 = coord(1/20)
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.

Languages

  • d 24
  • e 7

Types

  • m 30
  • s 6
  • r 1
