Search (595 results, page 1 of 30)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.32
    0.31827995 = product of:
      0.53046656 = sum of:
        0.11935261 = product of:
          0.35805783 = sum of:
            0.35805783 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.35805783 = score(doc=1826,freq=2.0), product of:
                0.38225585 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045087915 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.053056084 = weight(_text_:web in 1826) [ClassicSimilarity], result of:
          0.053056084 = score(doc=1826,freq=2.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.36057037 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.35805783 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.35805783 = score(doc=1826,freq=2.0), product of:
            0.38225585 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045087915 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.6 = coord(3/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
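The indented breakdown under entry 1 is Lucene's ClassicSimilarity "explain" output. As a minimal sketch (the formula is inferred from the printed factors, not from this catalog's actual configuration), the leaf score for the `_text_:3a` term can be recomputed:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Leaf term score of Lucene's ClassicSimilarity (TF-IDF):
    score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 = idf(docFreq=24, maxDocs=44218)
    query_weight = idf * query_norm                  # 0.38225585 = queryWeight
    field_weight = tf * idf * field_norm             # 0.93669677 = fieldWeight
    return query_weight * field_weight

# Factors copied from the explain tree of entry 1, term "3a":
score = classic_term_score(freq=2.0, doc_freq=24, max_docs=44218,
                           query_norm=0.045087915, field_norm=0.078125)
print(score)  # close to the printed 0.35805783 (Lucene rounds in 32-bit floats)
```

The document total (0.31827995) then follows from summing the per-term weights and multiplying by the coordination factor coord(3/5) = 0.6, as the outer lines of the explain tree show.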
  2. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.16
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.15
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  4. Third International World Wide Web Conference, Darmstadt 1995 : [table of contents] (1995) 0.09
    Abstract
    • ANDREW, K. and F. KAPPE: Serving information to the Web with Hyper-G
    • BARBIERI, K., H.M. DOERR and D. DWYER: Creating a virtual classroom for interactive education on the Web
    • CAMPBELL, J.K., S.B. JONES, N.M. STEPHENS and S. HURLEY: Constructing educational courseware using NCSA Mosaic and the World Wide Web
    • CATLEDGE, L.L. and J.E. PITKOW: Characterizing browsing strategies in the World-Wide Web
    • CLAUSNITZER, A. and P. VOGEL: A WWW interface to the OMNIS/Myriad literature retrieval engine
    • FISCHER, R. and L. PERROCHON: IDLE: Unified W3-access to interactive information servers
    • FOLEY, J.D.: Visualizing the World-Wide Web with the navigational view builder
    • FRANKLIN, S.D. and B. IBRAHIM: Advanced educational uses of the World-Wide Web
    • FUHR, N., U. PFEIFER and T. HUYNH: Searching structured documents with the enhanced retrieval functionality of free WAIS-sf and SFgate
    • FIORITO, M., J. OKSANEN and D.R. IOIVANE: An educational environment using WWW
    • KENT, R.E. and C. NEUSS: Conceptual analysis of resource meta-information
    • SHELDON, M.A. and R. WEISS: Discover: a resource discovery system based on content routing
    • WINOGRAD, T.: Beyond browsing: shared comments, SOAPs, Trails, and On-line communities
  5. Darnton, R.: Im Besitz des Wissens : Von der Gelehrtenrepublik des 18. Jahrhunderts zum digitalen Google-Monopol (2009) 0.08
    Abstract
    The Internet opens up before our eyes like a gigantic landscape of information. And since Google settled last autumn with the authors and publishers who had sued the big search engine for copyright infringement, the question of how to find one's way around the World Wide Web has gained new urgency. Over the past four years, Google has digitized millions of books, among them countless copyrighted works, from the holdings of major research libraries and put them online for searching. Authors and publishers objected that the digitization constituted a copyright violation. After protracted negotiations, a settlement was reached that will have far-reaching consequences for how books find their way to their readers. . . .
  6. Körber, S.: Suchmuster erfahrener und unerfahrener Suchmaschinennutzer im deutschsprachigen World Wide Web (2000) 0.08
    Abstract
    In a laboratory experiment, eighteen students in total were given two open web research tasks. While they worked on them with a search engine, they were covertly observed via proxy logfile recording. They supplied demographic data and information on their web usage habits, rated properties of the tasks, their own performance, and the search engine in questionnaires, and took a multiple-choice test on their knowledge of search engines. The participants were deliberately recruited and assigned to an experienced and an inexperienced subgroup of nine members each. The study rests on the comparison of these two groups: at its core are the bookmarks they saved as solutions, their assessments from the questionnaires, their search phrases, and the patterns of their search-engine interaction and navigation on target pages. These sequential action patterns, extracted from the logfiles, were comparatively visualized, counted, and interpreted. The thesis first describes the World Wide Web as a structurally and thematically complex information space. The author then discusses the general tasks and types of meta-media applications as well as the components of index-based search engines. The perspective then shifts from the structural-medial side to aspects of use. The author describes the use of meta-media applications as a co-selection between user and search engine based on decisions and develops a simple dynamic phase model, also considering the influence of different kinds of knowledge on the selection process. Building on this, general research questions and hypotheses for the experiment are formulated.
    The experiment itself is the next topic, centering on the observation instrument (logfile analysis), the choice of search service, the formulation of the tasks, the design of the questionnaires, and the procedure. The author then presents the results under three headings: first, performance, which allows the hypotheses to be tested; second, the participants' ratings, comments, and search phrases; and third, the visual and numerical analysis of the search patterns, which offer insight into the participants' search behavior. Summarizing interpretations and an outlook conclude the thesis.
  7. Leighton, H.V.: Performance of four World Wide Web (WWW) index services : Infoseek, Lycos, WebCrawler and WWWWorm (1995) 0.07
  8. World Wide Web JAVA : die revolutionäre Programmiersprache nicht nur für das Internet (1996) 0.07
  9. Laaff, M.: Googles genialer Urahn (2011) 0.07
    Abstract
    He planned a mechanical brain and wireless communication: at the beginning of the 20th century, the Belgian librarian Paul Otlet designed the world's first search engine - long before computers and the Internet. Why did his revolutionary ideas fall into oblivion?
    Content
    "Die erste Suchmaschine der Welt ist aus Holz und Papier gebaut. Mannshohe, dunkelbraune Schränke reihen sich aneinander, darin Zettelkästen mit Karteikarten. "Sechzehn Millionen Karteikarten", sagt Jaques Gillen und legt die Hand auf den Griff eines Schrankes. Gillen ist Archivar im Mundaneum - der Institution, die diesen gigantischen Katalog in den zwanziger Jahren des vergangenen Jahrhunderts betrieb. Anfragen gingen per Brief oder Telegramm in Brüssel ein, bis zu 1500 im Jahr. Antworten wurden per Hand herausgesucht, das konnte Wochen dauern. Ein Papier-Google, entwickelt Jahrzehnte vor dem Internet, ohne Computer. Erfinder des Mundaneums war der belgische Bibliothekar Paul Otlet. Der gelernte Jurist aus bürgerlichem Hause wollte das Wissen der Welt kartografieren und in Holzschränken aufbewahren. Seine Vision: Das Mundaneum sollte alle Bücher erfassen, die jemals erschienen sind - und sie über ein eigens entwickeltes Archivsystem miteinander verbinden. Archivar Gillen fischt eine Karteikarte aus einem Kasten. Aus dem Zahlenwirrwarr darauf kann er dutzende Informationen über das Buch, auf das verwiesen wird, ablesen. Mit seinem Archivsystem, darin sind sich viele Forscher heute einig, hat Otlet praktisch schon um die Jahrhundertwende den Hypertext erfunden - das Netz von Verknüpfungen, die uns heute durch das Internet navigieren. "Man könnte Otlet als einen Vordenker des Internets bezeichnen", sagt Gillen und steckt die Karteikarte zurück.
    Karteikästen, Telefone, Multimedia-Möbel 1934 entwickelte Otlet die Idee eines weltweiten Wissens-"Netzes". Er versuchte, kaum dass Radio und Fernsehen erfunden waren, Multimedia-Konzepte zu entwickeln, um die Kooperationsmöglichkeiten für Forscher zu verbessern. Otlet zerbrach sich den Kopf darüber, wie Wissen über große Distanzen zugänglich gemacht werden kann. Er entwickelte Multimedia-Arbeitsmöbel, die mit Karteikästen, Telefonen und anderen Features das versuchten, was heute an jedem Rechner möglich ist. Auch ohne die Hilfe elektronischer Datenverarbeitung entwickelte er Ideen, deren Umsetzung wir heute unter Begriffen wie Web 2.0 oder Wikipedia kennen. Trotzdem sind sein Name und seine Arbeit heute weitgehend in Vergessenheit geraten. Als Vordenker von Hypertext und Internet gelten die US-Amerikaner Vannevar Bush, Ted Nelson und Douglas Engelbart. Die Überbleibsel der Mundaneum-Sammlung vermoderten jahrzehntelang auf halb verfallenen Dachböden.
    Der Traum vom dynamischen, ständig wachsenden Wissensnetz Auch, weil Otlet bereits darüber nachdachte, wie in seinem vernetzten Wissenskatalog Anmerkungen einfließen könnten, die Fehler korrigieren oder Widerspruch abbilden. Vor dieser Analogie warnt jedoch Charles van den Heuvel von der Königlichen Niederländischen Akademie der Künste und Wissenschaften: Seiner Interpretation zufolge schwebte Otlet ein System vor, in dem Wissen hierarchisch geordnet ist: Nur eine kleine Gruppe von Wissenschaftlern sollte an der Einordnung von Wissen arbeiten; Bearbeitungen und Anmerkungen sollten, anders etwa als bei der Wikipedia, nicht mit der Information verschmelzen, sondern sie lediglich ergänzen. Das Netz, das Otlet sich ausmalte, ging weit über das World Wide Web mit seiner Hypertext-Struktur hinaus. Otlet wollte nicht nur Informationen miteinander verbunden werden - die Links sollten noch zusätzlich mit Bedeutung aufgeladen werden. Viele Experten sind sich einig, dass diese Idee von Otlet viele Parallelen zu dem Konzept des "semantischen Netz" aufweist. Dessen Ziel ist es, die Bedeutung von Informationen für Rechner verwertbar zu machen - so dass Informationen von ihnen interpretiert werden und maschinell weiterverarbeitet werden können. Projekte, die sich an einer Verwirklichung des semantischen Netzes versuchen, könnten von einem Blick auf Otlets Konzepte profitieren, so van den Heuvel, von dessen Überlegungen zu Hierarchie und Zentralisierung in dieser Frage. Im Mundaneum in Mons arbeitet man derzeit daran, Otlets Arbeiten zu digitalisieren, um sie ins Netz zu stellen. Das dürfte zwar noch ziemlich lange dauern, warnt Archivar Gillen. Aber wenn es soweit ist, wird sich endlich Otlets Vision erfüllen: Seine Sammlung des Wissens wird der Welt zugänglich sein. Papierlos, für jeden abrufbar."
    Date
    24.10.2008 14:19:22
    Footnote
    Cf.: http://www.spiegel.de/netzwelt/web/0,1518,768312,00.html.
  10. Resource Description Framework (RDF) (2004) 0.07
    Abstract
    The Resource Description Framework (RDF) integrates a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications provide a lightweight ontology system to support the exchange of knowledge on the Web. The W3C Semantic Web Activity Statement explains W3C's plans for RDF, including the RDF Core WG, Web Ontology and the RDF Interest Group.
    Theme
    Semantic Web
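As the abstract above notes, RDF uses XML as an interchange syntax for subject/property/value statements. A minimal sketch with Python's standard-library XML parser; the resource URI and Dublin Core properties below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Standard namespace URIs for RDF and Dublin Core elements.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"

# A hypothetical RDF/XML description of one catalog record.
rdf_xml = f"""<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="{RDF}" xmlns:dc="{DC}">
  <rdf:Description rdf:about="http://example.org/doc/3063">
    <dc:title>Resource Description Framework (RDF)</dc:title>
    <dc:date>2004</dc:date>
  </rdf:Description>
</rdf:RDF>"""

# Read the statements back: subject (rdf:about) and one property value.
root = ET.fromstring(rdf_xml)
desc = root.find(f"{{{RDF}}}Description")
subject = desc.get(f"{{{RDF}}}about")
title = desc.find(f"{{{DC}}}title").text
print(subject, "->", title)
```

Each child element of `rdf:Description` is one statement about the subject resource, which is what makes the format usable as a generic interchange syntax.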
  11. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.06
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrarily to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
    Source
    http://vigna.di.unimi.it/ftp/papers/PageRankAsFunction.pdf [Proceedings of the ACM World Wide Web Conference (WWW), 2005]
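The abstract describes PageRank as the stationary state of a damped Markov chain, computed via the Power Method. A minimal sketch under those definitions (the toy graph is invented for illustration; alpha defaults to Brin and Page's 0.85):

```python
def pagerank(out_links, alpha=0.85, iterations=100):
    """Power method for PageRank: repeatedly apply the damped transition
    matrix of the web graph until the rank vector stabilizes."""
    n = len(out_links)
    rank = [1.0 / n] * n
    for _ in range(iterations):
        nxt = [(1.0 - alpha) / n] * n        # uniform teleportation part
        for page, links in enumerate(out_links):
            if links:                        # split rank among outgoing links
                share = alpha * rank[page] / len(links)
                for target in links:
                    nxt[target] += share
            else:                            # dangling page: spread uniformly
                for target in range(n):
                    nxt[target] += alpha * rank[page] / n
        rank = nxt
    return rank

# Toy graph (invented): 0 -> 1, 1 -> 2, 2 -> 0; by symmetry each rank is 1/3.
ranks = pagerank([[1], [2], [0]])
```

Varying `alpha` here is exactly the experiment the paper analyzes: as alpha approaches 1 the teleportation term vanishes and convergence slows.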
  12. Bechhofer, S.; Harmelen, F. van; Hendler, J.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F.; Stein, L.A.: OWL Web Ontology Language Reference (2004) 0.06
    Abstract
    The Web Ontology Language OWL is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL is developed as a vocabulary extension of RDF (the Resource Description Framework) and is derived from the DAML+OIL Web Ontology Language. This document contains a structured informal description of the full set of OWL language constructs and is meant to serve as a reference for OWL users who want to construct OWL ontologies.
    Theme
    Semantic Web
  13. Wright, H.: Semantic Web and ontologies (2018) 0.06
    Abstract
    The Semantic Web and ontologies can help archaeologists combine and share data, making it more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies. Like all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
    Theme
    Semantic Web
  14. Wätjen, H.-J.: Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web : das DFG-Projekt GERHARD (1998) 0.06
  15. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.06
    Abstract
    The number of publications to be classified has been growing faster than they can be subject-indexed intellectually, at least since the advent of the World Wide Web. Methods are therefore sought to automate the classification of text objects, or at least to support intellectual classification. Procedures for automatic document classification (information retrieval, IR) have existed since 1968, and for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has included work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous operation. The VZG project Colibri/DDC has, among other things, also been concerned with automatic DDC classification since 2006. The investigations and developments serve to answer the research question: "Is it possible to achieve a substantively sound automatic DDC classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
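    A common baseline for the automated text categorization (ATC) the abstract refers to is a multinomial naive Bayes classifier. The sketch below is a hedged illustration of that technique only; the DDC labels and titles are invented examples, not GVK-PLUS data or the project's actual method:

    ```python
    # Minimal multinomial naive Bayes text classifier with Laplace smoothing.
    # Training data is a toy example (DDC 510 = mathematics, 940 = European history).
    import math
    from collections import Counter, defaultdict

    class NaiveBayes:
        def fit(self, docs):
            self.word_counts = defaultdict(Counter)
            self.class_counts = Counter()
            for text, label in docs:
                self.class_counts[label] += 1
                self.word_counts[label].update(text.lower().split())
            self.vocab = {w for c in self.word_counts.values() for w in c}
            return self

        def predict(self, text):
            words = text.lower().split()
            total = sum(self.class_counts.values())
            best, best_lp = None, float("-inf")
            for label, n in self.class_counts.items():
                lp = math.log(n / total)  # log prior
                wc = self.word_counts[label]
                denom = sum(wc.values()) + len(self.vocab)
                for w in words:
                    lp += math.log((wc[w] + 1) / denom)  # smoothed likelihood
                if lp > best_lp:
                    best, best_lp = label, lp
            return best

    train = [
        ("einführung in die algebra", "510"),
        ("lineare algebra und geometrie", "510"),
        ("geschichte des mittelalters", "940"),
        ("europa im mittelalter", "940"),
    ]
    clf = NaiveBayes().fit(train)
    ```

    Production systems use richer features and models, but this captures the core idea: assign the class that maximizes the smoothed log-probability of the title's words.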
  16. RDF Primer : W3C Recommendation 10 February 2004 (2004) 0.06
    Abstract
    The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web. This Primer is designed to provide the reader with the basic knowledge required to effectively use RDF. It introduces the basic concepts of RDF and describes its XML syntax. It describes how to define RDF vocabularies using the RDF Vocabulary Description Language, and gives an overview of some deployed RDF applications. It also describes the content and purpose of other RDF specification documents.
    Theme
    Semantic Web
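    The Primer's basic concept is that all RDF data reduces to subject-predicate-object triples. The following sketch represents that data model with plain Python tuples and a wildcard pattern match; the URIs are invented examples, not the Primer's own:

    ```python
    # RDF's triple data model, illustrated with plain tuples.
    # EX is a made-up namespace; real RDF work would use a library such as rdflib.
    EX = "http://example.org/"

    triples = {
        (EX + "rdf-primer", EX + "type",    EX + "Document"),
        (EX + "rdf-primer", EX + "creator", EX + "w3c"),
        (EX + "rdf-primer", EX + "year",    "2004"),
    }

    def match(s=None, p=None, o=None):
        """Return all triples matching a pattern; None acts as a wildcard."""
        return [(ts, tp, to) for ts, tp, to in triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

    props = match(s=EX + "rdf-primer")   # every statement about one resource
    ```

    Pattern matching over triples is also the core of SPARQL queries, which generalize this wildcard idea.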
  17. Saabiyeh, N.: What is a good ontology semantic similarity measure that considers multiple inheritance cases of concepts? (2018) 0.05
    Abstract
    I need to measure semantic similarity between CSO ontology concepts, depending on ontology structure (concept path, depth, least common subsumer (LCS), ...). CSO (Computer Science Ontology) is a large-scale ontology of research areas. A concept in CSO may have multiple parents/super concepts (i.e. a concept may be a child of many other concepts), e.g.: (world wide web) is a parent of (semantic web); (semantics) is a parent of (semantic web). I found some measures that meet my needs, but the papers proposing these measures are not cited, so I am hesitant about them. I also found a measure that depends on weighted edges, but it does not consider multiple inheritance (super concepts).
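    One standard answer to this kind of question is a Wu-Palmer-style measure that handles multiple inheritance by taking the deepest shared ancestor as the LCS. The sketch below is an illustration on a toy DAG mirroring the example in the question, not the real CSO data or any specific published measure:

    ```python
    # Wu-Palmer-style similarity on a concept DAG with multiple inheritance.
    # parents maps each concept to ALL of its super concepts (a toy hierarchy).
    parents = {
        "semantic web":    ["world wide web", "semantics"],
        "world wide web":  ["computer science"],
        "semantics":       ["computer science"],
        "linked data":     ["semantic web"],
        "computer science": [],
    }

    def ancestors(c):
        """All ancestors of c, including c itself (handles diamond shapes)."""
        seen, stack = set(), [c]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(parents.get(node, []))
        return seen

    def depth(c):
        """Shortest distance from the root to c (root has depth 1)."""
        ps = parents.get(c, [])
        return 1 if not ps else 1 + min(depth(p) for p in ps)

    def wu_palmer(a, b):
        common = ancestors(a) & ancestors(b)
        lcs = max(common, key=depth)          # deepest least common subsumer
        return 2 * depth(lcs) / (depth(a) + depth(b))
    ```

    With multiple parents the only real design choices are which shared ancestor counts as the LCS (deepest, here) and how depth is defined over several paths (shortest, here); a measure that ignores these cases will behave arbitrarily on a DAG like CSO.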
  18. Lischka, K.; Kremp, M.: Was der Google-Gegner weiß - und was nicht (2009) 0.05
    Abstract
    Clever presentation, weak data basis: the search engine Wolfram Alpha was touted in advance as a "Google killer" - now SPIEGEL ONLINE has tested a first version. It knows a lot about aspirin, fails on culture - and takes the CDU for a regional airport.
    Source
    http://www.spiegel.de/netzwelt/web/0,1518,623122,00.html
  19. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.05
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of presented content in new standardized languages such as RDF Schema and OWL. This thesis addresses the information retrieval aspect, i.e. it examines to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either explicitly modelled or can be derived implicitly by applying inference. Building on the retrieval engine PIRE developed at the University of Duisburg-Essen, uncertain inference is implemented with probabilistic predicate logic (pDatalog).
    Theme
    Semantic Web
  20. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.05
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. Semantic Web or also known as Web of Trust structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of Humans. Due to the ability of the Semantic Web to integrate data from disparate sources and hence makes it more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web and since then it has come a long way to become a more intelligent and intuitive web. Data Visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of Data Visualization and thus making it an appropriate combination. The objective of this chapter is to provide fundamental insights concerning the semantic web technologies and in addition to that it also elucidates the issues as well as the solutions regarding the semantic web. The purpose of this chapter is to highlight the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars i.e. RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology. It attempts to illustrate different approaches of the semantic web search engines. Besides stating numerous challenges faced by the semantic web it also illustrates the solutions.
    Theme
    Semantic Web
