Search (84 results, page 1 of 5)

  • × theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  • × type_ss:"a"
  1. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.06
    0.062481705 = product of:
      0.2967881 = sum of:
        0.100716956 = weight(_text_:semantische in 1852) [ClassicSimilarity], result of:
          0.100716956 = score(doc=1852,freq=6.0), product of:
            0.13923967 = queryWeight, product of:
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.025786186 = queryNorm
            0.7233352 = fieldWeight in 1852, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1852)
        0.09762034 = weight(_text_:ontologie in 1852) [ClassicSimilarity], result of:
          0.09762034 = score(doc=1852,freq=2.0), product of:
            0.18041065 = queryWeight, product of:
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.025786186 = queryNorm
            0.54110074 = fieldWeight in 1852, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1852)
        0.08622295 = weight(_text_:suche in 1852) [ClassicSimilarity], result of:
          0.08622295 = score(doc=1852,freq=6.0), product of:
            0.12883182 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.025786186 = queryNorm
            0.6692675 = fieldWeight in 1852, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1852)
        0.012227853 = product of:
          0.024455706 = sum of:
            0.024455706 = weight(_text_:22 in 1852) [ClassicSimilarity], result of:
              0.024455706 = score(doc=1852,freq=2.0), product of:
                0.09029883 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025786186 = queryNorm
                0.2708308 = fieldWeight in 1852, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1852)
          0.5 = coord(1/2)
      0.21052632 = coord(4/19)
    
    Abstract
    Ontologies are deployed in order to obtain, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art provides. The article presents an ontology developed and deployed at the FH Darmstadt that is intended both to cover the subject area of higher education broadly and, at the same time, to describe it semantically in a differentiated way. The problem of semantic search is that it must be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. The article describes which capabilities the software K-Infinity provides and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:58
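The score breakdowns shown for each hit follow Lucene's ClassicSimilarity (TF-IDF): each term weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm, and the partial sum is scaled by a coordination factor coord(matched clauses / total clauses). A minimal sketch that reproduces the numbers of hit 1 from the explain tree above:

```python
from math import sqrt, isclose

def term_weight(freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * query_norm               # idf(t) * queryNorm
    field_weight = sqrt(freq) * idf * field_norm  # tf = sqrt(freq)
    return query_weight * field_weight

# Values taken from hit 1 (doc 1852); queryNorm and fieldNorm are shared.
QN, FN = 0.025786186, 0.0546875
weights = [
    term_weight(6.0, 5.399778, QN, FN),         # _text_:semantische
    term_weight(2.0, 6.996407, QN, FN),         # _text_:ontologie
    term_weight(6.0, 4.996156, QN, FN),         # _text_:suche
    term_weight(2.0, 3.5018296, QN, FN) * 0.5,  # _text_:22, inner coord(1/2)
]
score = sum(weights) * (4 / 19)  # coord(4/19): 4 of 19 query clauses matched

assert isclose(score, 0.062481705, rel_tol=1e-5)
```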
  2. Hauer, M.: Neue OPACs braucht das Land ... dandelon.com (2006) 0.03
    0.030862601 = product of:
      0.14659736 = sum of:
        0.018205952 = weight(_text_:web in 6047) [ClassicSimilarity], result of:
          0.018205952 = score(doc=6047,freq=2.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.21634221 = fieldWeight in 6047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=6047)
        0.018205952 = weight(_text_:web in 6047) [ClassicSimilarity], result of:
          0.018205952 = score(doc=6047,freq=2.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.21634221 = fieldWeight in 6047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=6047)
        0.049841963 = weight(_text_:semantische in 6047) [ClassicSimilarity], result of:
          0.049841963 = score(doc=6047,freq=2.0), product of:
            0.13923967 = queryWeight, product of:
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.025786186 = queryNorm
            0.35795808 = fieldWeight in 6047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.046875 = fieldNorm(doc=6047)
        0.06034349 = weight(_text_:suche in 6047) [ClassicSimilarity], result of:
          0.06034349 = score(doc=6047,freq=4.0), product of:
            0.12883182 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.025786186 = queryNorm
            0.46838963 = fieldWeight in 6047, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.046875 = fieldNorm(doc=6047)
      0.21052632 = coord(4/19)
    
    Abstract
    In dandelon.com, in contrast to previous federated-search portal approaches, the titles of media are newly indexed with intelligentCAPTURE in a decentralized, collaborative fashion and substantially enriched in content. intelligentCAPTURE currently performs machine indexing of book tables of contents, books, blurbs, articles, and websites; it takes over bibliographic data from libraries (XML, Z39.50), publishers (ONIX + cover pages), serials agents (Swets), and the book trade (SOAP), and exports machine-generated index data and processed documents to library catalogues (MAB, MARC, XML) or documentation systems, to dandelon.com, and in part also to subject portals. The data are obtained through scanning and OCR, through file import and lookup on servers, and through Web spidering/crawling. The quality of search in dandelon.com is markedly better than in previous library systems. The semantic, multilingual search, currently drawing on 1.2 million subject terms, contributes strongly to the good search results.
  3. Ziegler, C.: Deus ex Machina : Das Web soll lernen, sich und uns zu verstehen (2002) 0.02
    0.021333933 = product of:
      0.13511491 = sum of:
        0.034329474 = weight(_text_:web in 530) [ClassicSimilarity], result of:
          0.034329474 = score(doc=530,freq=4.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.4079388 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=530)
        0.034329474 = weight(_text_:web in 530) [ClassicSimilarity], result of:
          0.034329474 = score(doc=530,freq=4.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.4079388 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=530)
        0.06645595 = weight(_text_:semantische in 530) [ClassicSimilarity], result of:
          0.06645595 = score(doc=530,freq=2.0), product of:
            0.13923967 = queryWeight, product of:
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.025786186 = queryNorm
            0.47727743 = fieldWeight in 530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.0625 = fieldNorm(doc=530)
      0.15789473 = coord(3/19)
    
    Abstract
    The WWW is dumb. A new approach is now meant to ensure that machines can grasp meanings and classify information correctly. And that is not all: once the servers have learned to understand, they would also be able to report to us on the results of their conversations with one another - the 'Semantic Web' would be born.
  4. Pahlevi, S.M.; Kitagawa, H.: Conveying taxonomy context for topic-focused Web search (2005) 0.02
    0.018849121 = product of:
      0.11937777 = sum of:
        0.048168425 = weight(_text_:web in 3310) [ClassicSimilarity], result of:
          0.048168425 = score(doc=3310,freq=14.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.57238775 = fieldWeight in 3310, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3310)
        0.048168425 = weight(_text_:web in 3310) [ClassicSimilarity], result of:
          0.048168425 = score(doc=3310,freq=14.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.57238775 = fieldWeight in 3310, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3310)
        0.023040922 = weight(_text_:services in 3310) [ClassicSimilarity], result of:
          0.023040922 = score(doc=3310,freq=2.0), product of:
            0.094670646 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.025786186 = queryNorm
            0.2433798 = fieldWeight in 3310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=3310)
      0.15789473 = coord(3/19)
    
    Abstract
    Introducing context into a user query is an effective way to improve search effectiveness. In this article we propose a method employing taxonomy-based search services such as Web directories to facilitate searches in any Web search interface that supports Boolean queries. The proposed method enables one to convey the current search context on the taxonomy of a taxonomy-based search service to the searches conducted with the Web search interfaces. The basic idea is to learn the search context in the form of a Boolean condition that is commonly accepted by many Web search interfaces, and to use the condition to modify the user query before forwarding it to the Web search interfaces. To guarantee that the modified query can always be processed by the Web search interfaces and to make the method adaptive to different user requirements on search result effectiveness, we have developed new fast classification learning algorithms.
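The core idea of the abstract above - learning the search context as a Boolean condition and conjoining it with the user query before forwarding it to a Web search interface - can be sketched as follows. The category terms and the sample query are illustrative placeholders, not drawn from the paper:

```python
def modify_query(user_query: str, context_terms: list[str]) -> str:
    """Conjoin the user query with a learned Boolean context condition.

    The condition is a disjunction of terms characteristic of the current
    taxonomy category; plain AND/OR syntax is accepted by most Boolean
    Web search interfaces.
    """
    condition = " OR ".join(context_terms)
    return f"({user_query}) AND ({condition})"

# Illustrative: searching "jaguar" within an 'Animals' directory category.
print(modify_query("jaguar", ["wildlife", "habitat", "species"]))
# (jaguar) AND (wildlife OR habitat OR species)
```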
  5. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.02
    0.018360559 = product of:
      0.11628354 = sum of:
        0.052027844 = weight(_text_:web in 1026) [ClassicSimilarity], result of:
          0.052027844 = score(doc=1026,freq=12.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.6182494 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.052027844 = weight(_text_:web in 1026) [ClassicSimilarity], result of:
          0.052027844 = score(doc=1026,freq=12.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.6182494 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.012227853 = product of:
          0.024455706 = sum of:
            0.024455706 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
              0.024455706 = score(doc=1026,freq=2.0), product of:
                0.09029883 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025786186 = queryNorm
                0.2708308 = fieldWeight in 1026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1026)
          0.5 = coord(1/2)
      0.15789473 = coord(3/19)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Theme
    Semantic Web
  6. Beier, H.: Vom Wort zum Wissen : Semantische Netze als Mittel gegen die Informationsflut (2004) 0.02
    0.017702704 = product of:
      0.1681757 = sum of:
        0.049841963 = weight(_text_:semantische in 2302) [ClassicSimilarity], result of:
          0.049841963 = score(doc=2302,freq=2.0), product of:
            0.13923967 = queryWeight, product of:
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.025786186 = queryNorm
            0.35795808 = fieldWeight in 2302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.046875 = fieldNorm(doc=2302)
        0.11833373 = weight(_text_:ontologie in 2302) [ClassicSimilarity], result of:
          0.11833373 = score(doc=2302,freq=4.0), product of:
            0.18041065 = queryWeight, product of:
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.025786186 = queryNorm
            0.6559132 = fieldWeight in 2302, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2302)
      0.10526316 = coord(2/19)
    
    Abstract
    "Thesaurus linguae latinae" - so heißt eine der frühesten Wort-Sammlungen. Seit Alters her beschäftigen sich Menschen mit der qualifizierten Aufbereitung von Information. Noch älter ist sogar das Konzept der Ontologie (wörtlich: die "Lehre vom Sein"), die sich als Disziplin der Philosophie bereits seit Aristoteles (384-322 v. Chr.) mit einer objektivistischen Beschreibung der Wirklichkeit beschäftigt. Ontologien - als Disziplin des modernen Wissensmanagements-sind eine Methode, in möglichst kompakter Form, d.h. unter Verwendung von Konzepten in verschiedenen Meta-Ebenen die reale Welt zu beschreiben. Thesaurus und Ontologie stellen zwei Konzepte dar, die auch heute noch in der Wissenschaft - und in jüngster Zeit mit zunehmender Bedeutung auch in der Wirtschaft - im Bereich des Informationsund Wissensmanagements zum Einsatz kommen. Beide spannen gewissermaßen den konzeptionellen Bogen, an dem sich ein pragmatisches Wissensmanagement heutzutage ausrichtet und sich in Form sogenannter semantischer Netze - auch Wissensnetze genannt - wiederfindet.
  7. ALEPH 500 mit multilingualem Thesaurus (2003) 0.02
    0.015277167 = product of:
      0.09675539 = sum of:
        0.018205952 = weight(_text_:web in 1639) [ClassicSimilarity], result of:
          0.018205952 = score(doc=1639,freq=2.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.21634221 = fieldWeight in 1639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1639)
        0.018205952 = weight(_text_:web in 1639) [ClassicSimilarity], result of:
          0.018205952 = score(doc=1639,freq=2.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.21634221 = fieldWeight in 1639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1639)
        0.06034349 = weight(_text_:suche in 1639) [ClassicSimilarity], result of:
          0.06034349 = score(doc=1639,freq=4.0), product of:
            0.12883182 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.025786186 = queryNorm
            0.46838963 = fieldWeight in 1639, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.046875 = fieldNorm(doc=1639)
      0.15789473 = coord(3/19)
    
    Abstract
    With the further development of the "multilingual thesaurus", the system ALEPH 500 (version 14.2) offers users refined search functions, e.g. increased precision, exclusion of irrelevant search results, retrieval of all titles relevant to a search, language-independent searching, and relationships between concepts. In the ALEPH 500 Web OPAC, the thesaurus is displayed in two windows. On the left, the thesaurus tree is shown with its hierarchies and concept relationships; in parallel, the information on the selected descriptor is displayed on the right. Further thesaurus-related functions can be executed from this window. The thesaurus is linked directly to the title catalogue, so that, starting from the chosen descriptor, the user can immediately display the existing titles in the OPAC. Both the single search via one descriptor and the top-down search via a branch of the thesaurus tree are carried along in the search history of the title catalogue. The search can be broadened, narrowed, modified, or saved as an SDI profile using the familiar ALEPH 500 functions. The thesaurus vocabulary is entered and maintained in the cataloguing module, observing generally valid rules, with the help of tailor-made, modifiable templates. Appropriate field assignments allow the manifold relationships of a descriptor to be represented and language variants to be stored. Background links ensure that changes in the thesaurus take effect immediately and directly on the bibliographic data.
  8. Küssow, J.: ALEPH 500 mit multilingualem Thesaurus (2003) 0.02
    0.015277167 = product of:
      0.09675539 = sum of:
        0.018205952 = weight(_text_:web in 1640) [ClassicSimilarity], result of:
          0.018205952 = score(doc=1640,freq=2.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.21634221 = fieldWeight in 1640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1640)
        0.018205952 = weight(_text_:web in 1640) [ClassicSimilarity], result of:
          0.018205952 = score(doc=1640,freq=2.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.21634221 = fieldWeight in 1640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1640)
        0.06034349 = weight(_text_:suche in 1640) [ClassicSimilarity], result of:
          0.06034349 = score(doc=1640,freq=4.0), product of:
            0.12883182 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.025786186 = queryNorm
            0.46838963 = fieldWeight in 1640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.046875 = fieldNorm(doc=1640)
      0.15789473 = coord(3/19)
    
    Abstract
    With the further development of the "multilingual thesaurus", the system ALEPH 500 (version 14.2) offers users refined search functions, e.g. increased precision, exclusion of irrelevant search results, retrieval of all titles relevant to a search, language-independent searching, and relationships between concepts. In the ALEPH 500 Web OPAC, the thesaurus is displayed in two windows. On the left, the thesaurus tree is shown with its hierarchies and concept relationships; in parallel, the information on the selected descriptor is displayed on the right. Further thesaurus-related functions can be executed from this window. The thesaurus is linked directly to the title catalogue, so that, starting from the chosen descriptor, the user can immediately display the existing titles in the OPAC. Both the single search via one descriptor and the top-down search via a branch of the thesaurus tree are carried along in the search history of the title catalogue. The search can be broadened, narrowed, modified, or saved as an SDI profile using the familiar ALEPH 500 functions. The thesaurus vocabulary is entered and maintained in the cataloguing module, observing generally valid rules, with the help of tailor-made, modifiable templates. Appropriate field assignments allow the manifold relationships of a descriptor to be represented and language variants to be stored. Background links ensure that changes in the thesaurus take effect immediately and directly on the bibliographic data.
  9. Scholer, F.; Williams, H.E.; Turpin, A.: Query association surrogates for Web search (2004) 0.02
    0.015136535 = product of:
      0.09586473 = sum of:
        0.036411904 = weight(_text_:web in 2236) [ClassicSimilarity], result of:
          0.036411904 = score(doc=2236,freq=8.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.43268442 = fieldWeight in 2236, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2236)
        0.036411904 = weight(_text_:web in 2236) [ClassicSimilarity], result of:
          0.036411904 = score(doc=2236,freq=8.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.43268442 = fieldWeight in 2236, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2236)
        0.023040922 = weight(_text_:services in 2236) [ClassicSimilarity], result of:
          0.023040922 = score(doc=2236,freq=2.0), product of:
            0.094670646 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.025786186 = queryNorm
            0.2433798 = fieldWeight in 2236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=2236)
      0.15789473 = coord(3/19)
    
    Abstract
    Collection sizes, query rates, and the number of users of Web search engines are increasing. Therefore, there is continued demand for innovation in providing search services that meet user information needs. In this article, we propose new techniques to add additional terms to documents with the goal of providing more accurate searches. Our techniques are based on query association, where queries are stored with documents that are highly similar statistically. We show that adding query associations to documents improves the accuracy of Web topic-finding searches by up to 7%, and provides an excellent complement to existing supplement techniques for site finding. We conclude that using document surrogates derived from query association is a valuable new technique for accurate Web searching.
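Query association as described above - storing past queries alongside the documents they are statistically most similar to, so that the query terms act as additional surrogate terms at search time - might be sketched like this. The similarity measure here is plain term overlap, an illustrative stand-in for the statistical similarity the authors use:

```python
from collections import defaultdict

def associate_queries(documents, query_log, top_k=2):
    """Attach each logged query to its top-k most similar documents.

    Similarity here is simple term overlap; a statistical ranking
    function would be substituted in a real system.
    """
    surrogates = defaultdict(list)
    for query in query_log:
        q_terms = set(query.lower().split())
        ranked = sorted(
            documents.items(),
            key=lambda kv: len(q_terms & set(kv[1].lower().split())),
            reverse=True,
        )
        for doc_id, _ in ranked[:top_k]:
            surrogates[doc_id].append(query)  # query terms supplement the doc
    return dict(surrogates)

docs = {
    "d1": "semantic web ontology retrieval",
    "d2": "library catalogue search interface",
}
print(associate_queries(docs, ["ontology search", "web retrieval"], top_k=1))
```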
  10. Prasad, A.R.D.; Madalli, D.P.: Faceted infrastructure for semantic digital libraries (2008) 0.01
    0.013744791 = product of:
      0.08705035 = sum of:
        0.033924792 = weight(_text_:web in 1905) [ClassicSimilarity], result of:
          0.033924792 = score(doc=1905,freq=10.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.40312994 = fieldWeight in 1905, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1905)
        0.033924792 = weight(_text_:web in 1905) [ClassicSimilarity], result of:
          0.033924792 = score(doc=1905,freq=10.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.40312994 = fieldWeight in 1905, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1905)
        0.019200768 = weight(_text_:services in 1905) [ClassicSimilarity], result of:
          0.019200768 = score(doc=1905,freq=2.0), product of:
            0.094670646 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.025786186 = queryNorm
            0.2028165 = fieldWeight in 1905, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1905)
      0.15789473 = coord(3/19)
    
    Abstract
    Purpose - The paper aims to argue that digital library retrieval should be based on semantic representations, and to propose a semantic infrastructure for digital libraries. Design/methodology/approach - The approach taken is a formal model based on subject representation for digital libraries. Findings - Search engines and search techniques have fallen short of user expectations, as they do not offer context-based retrieval. Deploying semantic web technologies would lead to a more efficient and more precise representation of digital library content and hence better retrieval. Though digital libraries often have metadata for their information resources which can be accessed through OAI-PMH, much remains to be accomplished in making digital libraries semantic web compliant. This paper presents a semantic infrastructure for digital libraries that will go a long way in providing them, and web-based information services, with products highly customised to users' needs. Research limitations/implications - Only a model for a semantic infrastructure is proposed here. This model was developed after studying current user-centric, top-down models adopted in digital library service architectures. Originality/value - This paper gives a generic model for building a semantic infrastructure for digital libraries. Faceted ontologies for digital libraries are just one approach, but the model may equally be adopted by groups working with different approaches to building ontologies in order to realise efficient retrieval in digital libraries.
    Footnote
    Beitrag eines Themenheftes "Digital libraries and the semantic web: context, applications and research".
    Theme
    Semantic Web
  11. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.01
    0.013548369 = product of:
      0.08580634 = sum of:
        0.036789242 = weight(_text_:web in 1319) [ClassicSimilarity], result of:
          0.036789242 = score(doc=1319,freq=6.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.43716836 = fieldWeight in 1319, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.036789242 = weight(_text_:web in 1319) [ClassicSimilarity], result of:
          0.036789242 = score(doc=1319,freq=6.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.43716836 = fieldWeight in 1319, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.012227853 = product of:
          0.024455706 = sum of:
            0.024455706 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.024455706 = score(doc=1319,freq=2.0), product of:
                0.09029883 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025786186 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.15789473 = coord(3/19)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the information a user inquires about. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes an idea to integrate two existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
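The integration proposed in the abstract above (expansion terms drawn from feedback about retrieved documents) is classically captured by the Rocchio relevance-feedback formula. A minimal sketch, assuming dict-based term-weight vectors and the conventional alpha/beta/gamma parameters; this illustrates the general technique, not the authors' specific method:

```python
from collections import Counter

def rocchio(query_vec, relevant_docs, nonrelevant_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Update a term-weight vector from relevance feedback (Rocchio, 1971).
    Vectors are dicts mapping term -> weight; docs are lists of such dicts."""
    updated = Counter({t: alpha * w for t, w in query_vec.items()})
    for doc in relevant_docs:
        for t, w in doc.items():
            updated[t] += beta * w / len(relevant_docs)
    for doc in nonrelevant_docs:
        for t, w in doc.items():
            updated[t] -= gamma * w / len(nonrelevant_docs)
    # Negative weights are conventionally clipped to zero.
    return {t: w for t, w in updated.items() if w > 0}
```

Terms that occur only in relevant documents (here "cat") enter the expanded query; terms from non-relevant documents are pushed out.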
  12. Otto, A.: Ordnungssysteme als Wissensbasis für die Suche in textbasierten Datenbeständen : dargestellt am Beispiel einer soziologischen Bibliographie (1998) 0.01
    0.012868739 = product of:
      0.122253016 = sum of:
        0.046991456 = weight(_text_:semantische in 6625) [ClassicSimilarity], result of:
          0.046991456 = score(doc=6625,freq=4.0), product of:
            0.13923967 = queryWeight, product of:
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.025786186 = queryNorm
            0.33748612 = fieldWeight in 6625, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.03125 = fieldNorm(doc=6625)
        0.075261556 = weight(_text_:suche in 6625) [ClassicSimilarity], result of:
          0.075261556 = score(doc=6625,freq=14.0), product of:
            0.12883182 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.025786186 = queryNorm
            0.5841845 = fieldWeight in 6625, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03125 = fieldNorm(doc=6625)
      0.10526316 = coord(2/19)
    
    Abstract
    A method is presented for using ordering systems for searching text-based data collections. "Ordering system" is used here as an umbrella term for arbitrary ordered collections of concepts, for example thesauri, classifications and formal systematics. Because thesauri are the most powerful ordering systems among these, they receive particular attention. The contribution is strictly practice-oriented and concentrates on the user interface. The user interface is based on ordering systems offered via a WWW interface. Depending on the subject area, the user can select a specific ordering system for the search. In contrast to classical approaches, the ordering systems are not used exclusively for searching descriptor fields, but for searching a basic index. Applied to the basic index, the ordering systems are effectively "decoupled" from the original database and from the descriptor fields for which they were developed. The contents of a database initially play no role in the choice of an ordering system; they only become noticeable in the number of hits during the search: a legal thesaurus will naturally find fewer relevant documents in a medical database than in a legal database, because the concepts represented in the legal thesaurus are more likely to be found in a legal database. The method has a modular design and provides for downstream semantic retrieval procedures, which will lead to improved retrieval effectiveness and efficiency. Thus, from a result set found solely by exact string matching, a subsequent semantic analysis filters out those documents that are relevant to the query.
The WWW user interface and the reuse of already existing ordering systems minimize the effort on the user's side. The costs of a search can be reduced both on the input side, since laborious "manual" indexing is no longer required, and on the output side, since users are given easy-to-use search options
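The core mechanism of this record (a thesaurus "decoupled" from its descriptor fields and applied, via exact string matching, to a basic index over the full document text) can be sketched as follows; the mini-thesaurus and documents are invented for illustration:

```python
# Term -> narrower terms; an invented two-entry legal mini-thesaurus.
THESAURUS = {
    "vertrag": ["kaufvertrag", "mietvertrag"],
}

def expand(term, thesaurus):
    """Return the term itself plus its narrower terms from the thesaurus."""
    return [term] + thesaurus.get(term, [])

def basic_index_search(term, documents, thesaurus):
    """Exact string matching of all expanded terms against the full document
    text (the basic index), not just against descriptor fields."""
    terms = expand(term, thesaurus)
    return [doc_id for doc_id, text in documents.items()
            if any(t in text.lower() for t in terms)]
```

As the abstract notes, the same thesaurus run against a medical collection would simply produce fewer hits, since its concepts rarely occur there.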
  13. Brunetti, J.M.; Roberto García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.01
    0.0119441515 = product of:
      0.075646296 = sum of:
        0.034329474 = weight(_text_:web in 1626) [ClassicSimilarity], result of:
          0.034329474 = score(doc=1626,freq=16.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.4079388 = fieldWeight in 1626, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.034329474 = weight(_text_:web in 1626) [ClassicSimilarity], result of:
          0.034329474 = score(doc=1626,freq=16.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.4079388 = fieldWeight in 1626, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.0069873445 = product of:
          0.013974689 = sum of:
            0.013974689 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
              0.013974689 = score(doc=1626,freq=2.0), product of:
                0.09029883 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025786186 = queryNorm
                0.15476047 = fieldWeight in 1626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1626)
          0.5 = coord(1/2)
      0.15789473 = coord(3/19)
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs. Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. 
There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Theme
    Semantic Web
  14. Mayr, P.; Schaer, P.; Mutschke, P.: ¬A science model driven retrieval prototype (2011) 0.01
    0.010669793 = product of:
      0.10136303 = sum of:
        0.05152107 = weight(_text_:services in 649) [ClassicSimilarity], result of:
          0.05152107 = score(doc=649,freq=10.0), product of:
            0.094670646 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.025786186 = queryNorm
            0.5442138 = fieldWeight in 649, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
        0.049841963 = weight(_text_:semantische in 649) [ClassicSimilarity], result of:
          0.049841963 = score(doc=649,freq=2.0), product of:
            0.13923967 = queryWeight, product of:
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.025786186 = queryNorm
            0.35795808 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.399778 = idf(docFreq=542, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
      0.10526316 = coord(2/19)
    
    Abstract
    This paper is about a better understanding of the structure and dynamics of science and the use of these insights to compensate for the typical problems that arise in metadata-driven Digital Libraries. Three science model driven retrieval services are presented: co-word analysis based query expansion, re-ranking via Bradfordizing and author centrality. The services are evaluated with relevance assessments, from which two important implications emerge: (1) precision values of the retrieval services are the same as or better than the tf-idf retrieval baseline and (2) each service retrieved a disjoint set of documents. Each service favors different, but still relevant, documents from those of pure term-frequency based rankings. The proposed models and derived retrieval services therefore open up new viewpoints on the scientific knowledge space and provide an alternative framework to structure scholarly information systems.
    Theme
    Semantische Interoperabilität
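Of the three services named in the abstract, re-ranking via Bradfordizing is the easiest to sketch: hits from the most productive (Bradford "core") journals are moved to the front of the result list. A rough sketch, not the authors' implementation:

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a result list so that documents from the most productive
    journals (the Bradford core) come first.
    `results` is a list of (doc_id, journal) pairs in original rank order."""
    journal_freq = Counter(journal for _, journal in results)
    # Stable sort: journal productivity first, original rank as tie-breaker.
    return [doc for doc, _ in sorted(
        results, key=lambda pair: -journal_freq[pair[1]])]
```

Because Python's sort is stable, documents from equally productive journals keep their original term-frequency-based order, so the service re-ranks rather than replaces the baseline.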
  15. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.01
    0.010140721 = product of:
      0.09633685 = sum of:
        0.048168425 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
          0.048168425 = score(doc=3090,freq=14.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.57238775 = fieldWeight in 3090, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.048168425 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
          0.048168425 = score(doc=3090,freq=14.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.57238775 = fieldWeight in 3090, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
      0.10526316 = coord(2/19)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small world topology of the Web, with encouraging implications for the design of better crawling algorithms.
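The link-content conjecture can be operationalized as the average lexical similarity between a page and the pages linking to it; a toy bag-of-words cosine sketch (Menczer's actual similarity measures differ):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bags of words (Counter -> Counter)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def link_content_similarity(page_text, inlink_texts):
    """Average lexical similarity between a page and its in-linking pages;
    the link-content conjecture predicts this exceeds the similarity of the
    page to randomly chosen pages."""
    page = Counter(page_text.lower().split())
    sims = [cosine(page, Counter(t.lower().split())) for t in inlink_texts]
    return sum(sims) / len(sims) if sims else 0.0
```
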
  16. Poynder, R.: Web research engines? (1996) 0.01
    0.008570474 = product of:
      0.0814195 = sum of:
        0.04070975 = weight(_text_:web in 5698) [ClassicSimilarity], result of:
          0.04070975 = score(doc=5698,freq=10.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.48375595 = fieldWeight in 5698, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
        0.04070975 = weight(_text_:web in 5698) [ClassicSimilarity], result of:
          0.04070975 = score(doc=5698,freq=10.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.48375595 = fieldWeight in 5698, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
      0.10526316 = coord(2/19)
    
    Abstract
    Describes the shortcomings of search engines for the WWW comparing their current capabilities to those of the first generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that at best Web search engines can only offer free text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may be redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting. Further complications arise with foreign languages
  17. Brandão, W.C.; Santos, R.L.T.; Ziviani, N.; Moura, E.S. de; Silva, A.S. da: Learning to expand queries using entities (2014) 0.01
    0.008154635 = product of:
      0.051646024 = sum of:
        0.021455921 = weight(_text_:web in 1343) [ClassicSimilarity], result of:
          0.021455921 = score(doc=1343,freq=4.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.25496176 = fieldWeight in 1343, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1343)
        0.021455921 = weight(_text_:web in 1343) [ClassicSimilarity], result of:
          0.021455921 = score(doc=1343,freq=4.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.25496176 = fieldWeight in 1343, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1343)
        0.008734181 = product of:
          0.017468361 = sum of:
            0.017468361 = weight(_text_:22 in 1343) [ClassicSimilarity], result of:
              0.017468361 = score(doc=1343,freq=2.0), product of:
                0.09029883 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025786186 = queryNorm
                0.19345059 = fieldWeight in 1343, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1343)
          0.5 = coord(1/2)
      0.15789473 = coord(3/19)
    
    Abstract
    A substantial fraction of web search queries contain references to entities, such as persons, organizations, and locations. Recently, methods that exploit named entities have been shown to be more effective for query expansion than traditional pseudorelevance feedback methods. In this article, we introduce a supervised learning approach that exploits named entities for query expansion using Wikipedia as a repository of high-quality feedback documents. In contrast with existing entity-oriented pseudorelevance feedback approaches, we tackle query expansion as a learning-to-rank problem. As a result, not only do we select effective expansion terms but we also weigh these terms according to their predicted effectiveness. To this end, we exploit the rich structure of Wikipedia articles to devise discriminative term features, including each candidate term's proximity to the original query terms, as well as its frequency across multiple article fields and in category and infobox descriptors. Experiments on three Text REtrieval Conference web test collections attest to the effectiveness of our approach, with gains of up to 23.32% in terms of mean average precision, 19.49% in terms of precision at 10, and 7.86% in terms of normalized discounted cumulative gain compared with a state-of-the-art approach for entity-oriented query expansion.
    Date
    22. 8.2014 17:07:50
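Two of the feature families the abstract names for candidate expansion terms (proximity to the original query terms, and frequency in the feedback article) can be sketched as a linear scoring function. In the paper these weights are learned in a learning-to-rank setting; here they are fixed by hand, and all names are illustrative:

```python
def score_expansion_term(term, query_terms, article_text, weights):
    """Score one candidate expansion term with a hand-weighted linear model
    over two illustrative features: relative frequency in the feedback
    article, and inverse distance to the nearest query-term occurrence."""
    tokens = article_text.lower().split()
    positions = [i for i, tok in enumerate(tokens) if tok == term]
    freq = len(positions) / len(tokens)
    q_positions = [i for i, tok in enumerate(tokens) if tok in query_terms]
    if positions and q_positions:
        min_dist = min(abs(p - q) for p in positions for q in q_positions)
        proximity = 1.0 / (1 + min_dist)
    else:
        proximity = 0.0
    return weights["freq"] * freq + weights["proximity"] * proximity
```

Ranking all candidate terms by such a score and keeping the top few is the term-selection half; the paper additionally predicts a weight for each kept term.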
  18. Bilal, D.; Kirby, J.: Differences and similarities in information seeking : children and adults as Web users (2002) 0.01
    0.007665664 = product of:
      0.07282381 = sum of:
        0.036411904 = weight(_text_:web in 2591) [ClassicSimilarity], result of:
          0.036411904 = score(doc=2591,freq=18.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.43268442 = fieldWeight in 2591, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2591)
        0.036411904 = weight(_text_:web in 2591) [ClassicSimilarity], result of:
          0.036411904 = score(doc=2591,freq=18.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.43268442 = fieldWeight in 2591, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2591)
      0.10526316 = coord(2/19)
    
    Abstract
    This study examined the success and information seeking behaviors of seventh-grade science students and graduate students in information science in using the Yahooligans! Web search engine/directory. It investigated these users' cognitive, affective, and physical behaviors as they sought the answer for a fact-finding task. It analyzed and compared the overall patterns of children's and graduate students' Web activities, including searching moves, browsing moves, backtracking moves, looping moves, screen scrolling, target location and deviation moves, and the time they took to complete the task. The authors applied Bilal's Web Traversal Measure to quantify these users' effectiveness, efficiency, and quality of moves they made. Results were based on 14 children's Web sessions and nine graduate students' sessions. Both groups' Web activities were captured online using Lotus ScreenCam, a software package that records and replays online activities in Web browsers. Children's affective states were captured via exit interviews. Graduate students' affective states were extracted from the journal writings they kept during the traversal process. The study findings reveal that 89% of the graduate students found the correct answer to the search task as opposed to 50% of the children. Based on the Measure, graduate students' weighted effectiveness, efficiency, and quality of the Web moves they made were much higher than those of the children. Regardless of success and weighted scores, however, similarities and differences in information seeking were found between the two groups. Yahooligans!'s poor keyword-searching structure was a major factor that contributed to the "breakdowns" children and graduate students experienced. Unlike children, graduate students were able to recover from "breakdowns" quickly and effectively. Three main factors influenced these users' performance: ability to recover from "breakdowns", navigational style, and focus on task.
Children and graduate students made recommendations for improving Yahooligans! interface design. Implications for Web user training and system design improvements are made.
  19. Roy, R.S.; Agarwal, S.; Ganguly, N.; Choudhury, M.: Syntactic complexity of Web search queries through the lenses of language models, networks and users (2016) 0.01
    0.0071420614 = product of:
      0.067849584 = sum of:
        0.033924792 = weight(_text_:web in 3188) [ClassicSimilarity], result of:
          0.033924792 = score(doc=3188,freq=10.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.40312994 = fieldWeight in 3188, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3188)
        0.033924792 = weight(_text_:web in 3188) [ClassicSimilarity], result of:
          0.033924792 = score(doc=3188,freq=10.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.40312994 = fieldWeight in 3188, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3188)
      0.10526316 = coord(2/19)
    
    Abstract
    Across the world, millions of users interact with search engines every day to satisfy their information needs. As the Web grows bigger over time, such information needs, manifested through user search queries, also become more complex. However, there has been no systematic study that quantifies the structural complexity of Web search queries. In this research, we make an attempt towards understanding and characterizing the syntactic complexity of search queries using a multi-pronged approach. We use traditional statistical language modeling techniques to quantify and compare the perplexity of queries with natural language (NL). We then use complex network analysis for a comparative analysis of the topological properties of queries issued by real Web users and those generated by statistical models. Finally, we conduct experiments to study whether search engine users are able to identify real queries, when presented along with model-generated ones. The three complementary studies show that the syntactic structure of Web queries is more complex than what n-grams can capture, but simpler than NL. Queries, thus, seem to represent an intermediate stage between syntactic and non-syntactic communication.
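The statistical-language-model part of the comparison above rests on perplexity under an n-gram model; a minimal add-one-smoothed bigram sketch (the study's models are of course trained on far larger corpora):

```python
import math
from collections import Counter

def bigram_perplexity(sequence, corpus, vocab_size):
    """Perplexity of a token sequence under an add-one-smoothed bigram model
    estimated from `corpus` (a list of token lists). Lower perplexity means
    the sequence is more predictable under the model."""
    bigrams, unigrams = Counter(), Counter()
    for sent in corpus:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    log_prob = 0.0
    for prev, cur in zip(sequence, sequence[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    n = len(sequence) - 1
    return math.exp(-log_prob / n)
```

Comparing the perplexity of real user queries against that of natural-language sentences (and of model-generated queries) is the kind of measurement the study reports.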
  20. Khan, M.S.; Khor, S.: Enhanced Web document retrieval using automatic query expansion (2004) 0.01
    0.00663866 = product of:
      0.06306727 = sum of:
        0.031533636 = weight(_text_:web in 2091) [ClassicSimilarity], result of:
          0.031533636 = score(doc=2091,freq=6.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.37471575 = fieldWeight in 2091, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2091)
        0.031533636 = weight(_text_:web in 2091) [ClassicSimilarity], result of:
          0.031533636 = score(doc=2091,freq=6.0), product of:
            0.08415349 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.025786186 = queryNorm
            0.37471575 = fieldWeight in 2091, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2091)
      0.10526316 = coord(2/19)
    
    Abstract
    The ever growing popularity of the Internet as a source of information, coupled with the accompanying growth in the number of documents made available through the World Wide Web, is leading to an increasing demand for more efficient and accurate information retrieval tools. Numerous techniques have been proposed and tried for improving the effectiveness of searching the World Wide Web for documents relevant to a given topic of interest. The specification of appropriate keywords and phrases by the user is crucial for the successful execution of a query as measured by the relevance of documents retrieved. Lack of users' knowledge on the search topic and their changing information needs often make it difficult for them to find suitable keywords or phrases for a query. This results in searches that fail to cover all likely aspects of the topic of interest. We describe a scheme that attempts to remedy this situation by automatically expanding the user query through the analysis of initially retrieved documents. Experimental results to demonstrate the effectiveness of the query expansion scheme are presented.
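The scheme described (expanding the user query by analysing the initially retrieved documents) is a form of pseudo-relevance feedback. A minimal sketch that picks frequent terms from the top-ranked documents; term selection in the actual paper is more elaborate:

```python
from collections import Counter

STOPWORDS = frozenset({"the", "of", "a", "and", "to", "for", "in"})

def prf_expansion_terms(query, top_docs, k=3, stopwords=STOPWORDS):
    """Append to the query the k most frequent non-query, non-stopword terms
    from the top-ranked documents (minimal pseudo-relevance feedback)."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in top_docs:
        counts.update(t for t in doc.lower().split()
                      if t not in query_terms and t not in stopwords)
    expansion = [t for t, _ in counts.most_common(k)]
    return query.lower().split() + expansion
```

The expanded query is then re-run, so aspects of the topic the user did not name, but the best-matching documents did, are covered on the second pass.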
