Search (109 results, page 1 of 6)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Horch, A.; Kett, H.; Weisbecker, A.: Semantische Suchsysteme für das Internet : Architekturen und Komponenten semantischer Suchmaschinen (2013) 0.04
    0.0397546 = product of:
      0.1788957 = sum of:
        0.041067798 = weight(_text_:web in 4063) [ClassicSimilarity], result of:
          0.041067798 = score(doc=4063,freq=8.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.36057037 = fieldWeight in 4063, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.1378279 = weight(_text_:suchmaschine in 4063) [ClassicSimilarity], result of:
          0.1378279 = score(doc=4063,freq=10.0), product of:
            0.19733392 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.034900077 = queryNorm
            0.69845015 = fieldWeight in 4063, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
      0.22222222 = coord(2/9)
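    The relevance scores shown with each entry are Lucene "explain" trees following the classic TF-IDF similarity: per-term score = queryWeight x fieldWeight, with tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), and a coord factor for the fraction of query terms matched. A minimal Python sketch that reproduces this first entry's score from the values printed above - an illustration of the formula, not the catalogue's actual code:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm       # queryWeight in the tree
    field_weight = tf * term_idf * field_norm  # fieldWeight in the tree
    return query_weight * field_weight

# Term statistics copied from the explain tree for doc 4063:
w_web  = term_score(freq=8.0,  doc_freq=4597, max_docs=44218,
                    query_norm=0.034900077, field_norm=0.0390625)
w_such = term_score(freq=10.0, doc_freq=420,  max_docs=44218,
                    query_norm=0.034900077, field_norm=0.0390625)

score = (2 / 9) * (w_web + w_such)   # coord(2/9): 2 of 9 query terms matched
print(round(score, 7))               # ~0.0397546, as shown above
```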
    
    Abstract
    The flood of information is growing exponentially. In this "information explosion" an unmanageable amount of new information is created on the Web every day: for example, 430 German-language Wikipedia articles, 2.4 million tweets on Twitter and 12.2 million comments on Facebook. While a few years ago Google was used in Germany as virtually the only search engine for accessing information on the Web, today the opinions published in social media and elsewhere, and with them the pre-selection and evaluation of information by individual experts and opinion leaders, are gaining in importance. But how can topic-specific information be identified efficiently for concrete questions and be prepared and visualized according to need? This study gives an overview of semantic standards and formats, the processes of semantic search, methods and techniques of semantic search systems, components for the development of semantic search engines, and the design of existing applications. The study explains the basic structure of semantic search systems and presents methods of semantic search. It also introduces software tools with which individual functionalities of semantic search engines can be implemented. Finally, existing semantic search engines are examined to illustrate how the systems differ in structure and functionality.
    RSWK
    Suchmaschine / Semantic Web / Information Retrieval
    Suchmaschine / Information Retrieval / Ranking / Datenstruktur / Kontextbezogenes System
    Subject
    Suchmaschine / Semantic Web / Information Retrieval
    Suchmaschine / Information Retrieval / Ranking / Datenstruktur / Kontextbezogenes System
  2. Khan, M.S.; Khor, S.: Enhanced Web document retrieval using automatic query expansion (2004) 0.04
    0.03881802 = product of:
      0.11645406 = sum of:
        0.06423235 = weight(_text_:wide in 2091) [ClassicSimilarity], result of:
          0.06423235 = score(doc=2091,freq=4.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.4153836 = fieldWeight in 2091, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2091)
        0.04267891 = weight(_text_:web in 2091) [ClassicSimilarity], result of:
          0.04267891 = score(doc=2091,freq=6.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.37471575 = fieldWeight in 2091, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2091)
        0.009542801 = product of:
          0.028628403 = sum of:
            0.028628403 = weight(_text_:29 in 2091) [ClassicSimilarity], result of:
              0.028628403 = score(doc=2091,freq=2.0), product of:
                0.12276756 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034900077 = queryNorm
                0.23319192 = fieldWeight in 2091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2091)
          0.33333334 = coord(1/3)
      0.33333334 = coord(3/9)
    
    Abstract
    The ever-growing popularity of the Internet as a source of information, coupled with the accompanying growth in the number of documents made available through the World Wide Web, is leading to an increasing demand for more efficient and accurate information retrieval tools. Numerous techniques have been proposed and tried for improving the effectiveness of searching the World Wide Web for documents relevant to a given topic of interest. The specification of appropriate keywords and phrases by the user is crucial for the successful execution of a query as measured by the relevance of documents retrieved. Lack of users' knowledge on the search topic and their changing information needs often make it difficult for them to find suitable keywords or phrases for a query. This results in searches that fail to cover all likely aspects of the topic of interest. We describe a scheme that attempts to remedy this situation by automatically expanding the user query through the analysis of initially retrieved documents. Experimental results to demonstrate the effectiveness of the query expansion scheme are presented.
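    The expansion-from-initially-retrieved-documents idea described here is generally known as pseudo-relevance feedback. A minimal sketch of that general idea - the `retrieve` callable and the candidate weighting are illustrative assumptions, not the authors' scheme:

```python
from collections import Counter
import math

def expand_query(query_terms, retrieve, top_k=10, n_expansion=5):
    """Pseudo-relevance feedback: mine expansion terms from the
    top-ranked documents of an initial retrieval run.
    `retrieve(terms)` is an assumed callable returning a ranked
    list of documents, each a list of tokens."""
    top_docs = retrieve(query_terms)[:top_k]
    tf = Counter()   # total occurrences of each term in the top docs
    df = Counter()   # number of top docs containing each term
    for doc in top_docs:
        tf.update(doc)
        df.update(set(doc))
    # Weight candidates by frequency, damped by how widely they spread
    # (an illustrative weighting, not the paper's formula).
    weight = {t: tf[t] * math.log(1 + top_k / df[t])
              for t in tf if t not in query_terms}
    best = sorted(weight, key=weight.get, reverse=True)[:n_expansion]
    return list(query_terms) + best
```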
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.1, S.29-40
  3. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.04
    0.037938055 = product of:
      0.11381416 = sum of:
        0.052988984 = weight(_text_:wide in 1319) [ClassicSimilarity], result of:
          0.052988984 = score(doc=1319,freq=2.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.342674 = fieldWeight in 1319, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.049792062 = weight(_text_:web in 1319) [ClassicSimilarity], result of:
          0.049792062 = score(doc=1319,freq=6.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.43716836 = fieldWeight in 1319, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.011033117 = product of:
          0.03309935 = sum of:
            0.03309935 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.03309935 = score(doc=1319,freq=2.0), product of:
                0.12221412 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034900077 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.33333334 = coord(1/3)
      0.33333334 = coord(3/9)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the information a user seeks. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes an idea to integrate two existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  4. Ingwersen, P.; Järvelin, K.: ¬The turn : integration of information seeking and retrieval in context (2005) 0.03
    0.029855613 = product of:
      0.089566834 = sum of:
        0.018924637 = weight(_text_:wide in 1323) [ClassicSimilarity], result of:
          0.018924637 = score(doc=1323,freq=2.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.122383565 = fieldWeight in 1323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.010266949 = weight(_text_:web in 1323) [ClassicSimilarity], result of:
          0.010266949 = score(doc=1323,freq=2.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.09014259 = fieldWeight in 1323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.060375247 = weight(_text_:modell in 1323) [ClassicSimilarity], result of:
          0.060375247 = score(doc=1323,freq=6.0), product of:
            0.2098649 = queryWeight, product of:
              6.0133076 = idf(docFreq=293, maxDocs=44218)
              0.034900077 = queryNorm
            0.28768626 = fieldWeight in 1323, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.0133076 = idf(docFreq=293, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
      0.33333334 = coord(3/9)
    
    Abstract
    The Turn analyzes the research of information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R ranging from systems oriented laboratory IR research to social science oriented information seeking studies. TOC: Introduction. - The Cognitive Framework for Information. - The Development of Information Seeking Research. - Systems-Oriented Information Retrieval. - Cognitive and User-Oriented Information Retrieval. - The Integrated IS&R Research Framework. - Implications of the Cognitive Framework for IS&R. - Towards a Research Program. - Conclusion. - Definitions. - References. - Index.
    Footnote
    - Chapter five offers a corresponding overview of the cognitive and user-oriented IR tradition. It shows which IR studies other than the laboratory-oriented ones can be carried out, ranging from early models (e.g. Taylor) via Belkin's ASK concept to Ingwersen's model of polyrepresentation, and from Bates' berrypicking approach to Vakkari's "task-based" IR model. Web IR, OKAPI and discussions of the concept of relevance are also addressed here. - In the following chapter the authors propose an integrated IS&R research model that takes into account the manifold relationships between information seekers, system developers, interfaces and the other aspects involved. Their approach unites traditional laboratory research with various user-oriented traditions from IS&R, in particular with the empirical approaches to IS and to interactive IR, in a holistic cognitive model. - Chapter seven examines the implications of this model for IS&R; what is particularly striking is how complex the requests of information seekers are compared with the relative simplicity of the algorithms for finding relevant documents. Mapping the widely varying cognitive states of those posing queries within system development is certainly no trivial task. How the problem of incorporating the central aspect of meaning can be solved remains an open question. - The eighth chapter attempts to translate the points discussed before into an IS&R research program (processes - behaviour - system functionality - performance), together with some critical remarks on research practice to date. - The concluding ninth chapter briefly summarizes the book and can therefore also be read as an introduction to the topic. It is followed by a very useful glossary of all the important terms used in the book, a bibliography and a subject index. Ingwersen and Järvelin have presented a very ambitious and nevertheless readable book. The overview chapters and discussions offered are not an introduction to information science, but they cover a large part of the subfields that are current in this discipline today and addressed by ongoing research activities and publications. One could also put it - perhaps a little pointedly - like this: what is treated here is, in effect, modern information science. The attempt to unite the two research traditions will certainly secure this work a place in the history of the discipline. The title of the book seems not entirely fortunate. "The Turn" is meant to signify a turn, namely the one towards an integrated view of IS and IR. This is probably conveyed better by the subtitle, but that presumably seemed too dry to the authors. A pity, since "The Turn" already exists in our union catalogue, for example, albeit with the addition "from the Cold War to a new era; the United States and the Soviet Union 1983-1990". The publisher, which has otherwise delivered a solid (if not exactly inexpensive) product, should have prevented such imprecise duplication. Regardless of this, I recommend this important book for acquisition without reservation; it should not be missing from any larger library."
  5. Gábor, K.; Zargayouna, H.; Tellier, I.; Buscaldi, D.; Charnois, T.: ¬A typology of semantic relations dedicated to scientific literature analysis (2016) 0.02
    0.018163655 = product of:
      0.081736445 = sum of:
        0.052988984 = weight(_text_:wide in 2933) [ClassicSimilarity], result of:
          0.052988984 = score(doc=2933,freq=2.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.342674 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.02874746 = weight(_text_:web in 2933) [ClassicSimilarity], result of:
          0.02874746 = score(doc=2933,freq=2.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.25239927 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
      0.22222222 = coord(2/9)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  6. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.02
    0.01809994 = product of:
      0.081449725 = sum of:
        0.07041661 = weight(_text_:web in 1026) [ClassicSimilarity], result of:
          0.07041661 = score(doc=1026,freq=12.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.6182494 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.011033117 = product of:
          0.03309935 = sum of:
            0.03309935 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
              0.03309935 = score(doc=1026,freq=2.0), product of:
                0.12221412 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034900077 = queryNorm
                0.2708308 = fieldWeight in 1026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1026)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Theme
    Semantic Web
  7. Wolfram, D.; Xie, H.I.: Traditional IR for web users : a context for general audience digital libraries (2002) 0.02
    0.017537126 = product of:
      0.07891707 = sum of:
        0.037849274 = weight(_text_:wide in 2589) [ClassicSimilarity], result of:
          0.037849274 = score(doc=2589,freq=2.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.24476713 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
        0.041067798 = weight(_text_:web in 2589) [ClassicSimilarity], result of:
          0.041067798 = score(doc=2589,freq=8.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.36057037 = fieldWeight in 2589, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
      0.22222222 = coord(2/9)
    
    Abstract
    The emergence of general audience digital libraries (GADLs) defines a context that represents a hybrid of both "traditional" IR, using primarily bibliographic resources provided by database vendors, and "popular" IR, exemplified by public search systems available on the World Wide Web. Findings of a study investigating end-user searching and response to a GADL are reported. Data collected from a Web-based end-user survey and data logs of resource usage for a Web-based GADL were analyzed for user characteristics, patterns of access and use, and user feedback. Cross-tabulations using respondent demographics revealed several key differences in how the system was used and valued by users of different age groups. Older users valued the service more than younger users and engaged in different searching and viewing behaviors. The GADL more closely resembles traditional retrieval systems in terms of content and purpose of use, but is more similar to popular IR systems in terms of user behavior and accessibility. A model that defines the dual context of the GADL environment is derived from the data analysis and existing IR models in general and other specific contexts. The authors demonstrate the distinguishing characteristics of this IR context, and discuss implications for the development and evaluation of future GADLs to accommodate a variety of user needs and expectations.
  8. Gillitzer, B.: Yewno (2017) 0.02
    0.016897922 = product of:
      0.07604065 = sum of:
        0.06973601 = weight(_text_:suchmaschine in 3447) [ClassicSimilarity], result of:
          0.06973601 = score(doc=3447,freq=4.0), product of:
            0.19733392 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.034900077 = queryNorm
            0.3533909 = fieldWeight in 3447, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03125 = fieldNorm(doc=3447)
        0.0063046385 = product of:
          0.018913915 = sum of:
            0.018913915 = weight(_text_:22 in 3447) [ClassicSimilarity], result of:
              0.018913915 = score(doc=3447,freq=2.0), product of:
                0.12221412 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034900077 = queryNorm
                0.15476047 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3447)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    "Die Bayerische Staatsbibliothek testet den semantischen "Discovery Service" Yewno als zusätzliche thematische Suchmaschine für digitale Volltexte. Der Service ist unter folgendem Link erreichbar: https://www.bsb-muenchen.de/recherche-und-service/suchen-und-finden/yewno/. Das Identifizieren von Themen, um die es in einem Text geht, basiert bei Yewno alleine auf Methoden der künstlichen Intelligenz und des maschinellen Lernens. Dabei werden sie nicht - wie bei klassischen Katalogsystemen - einem Text als Ganzem zugeordnet, sondern der jeweiligen Textstelle. Die Eingabe eines Suchwortes bzw. Themas, bei Yewno "Konzept" genannt, führt umgehend zu einer grafischen Darstellung eines semantischen Netzwerks relevanter Konzepte und ihrer inhaltlichen Zusammenhänge. So ist ein Navigieren über thematische Beziehungen bis hin zu den Fundstellen im Text möglich, die dann in sogenannten Snippets angezeigt werden. In der Test-Anwendung der Bayerischen Staatsbibliothek durchsucht Yewno aktuell 40 Millionen englischsprachige Dokumente aus Publikationen namhafter Wissenschaftsverlage wie Cambridge University Press, Oxford University Press, Wiley, Sage und Springer, sowie Dokumente, die im Open Access verfügbar sind. Nach der dreimonatigen Testphase werden zunächst die Rückmeldungen der Nutzer ausgewertet. Ob und wann dann der Schritt von der klassischen Suchmaschine zum semantischen "Discovery Service" kommt und welche Bedeutung Anwendungen wie Yewno in diesem Zusammenhang einnehmen werden, ist heute noch nicht abzusehen. Die Software Yewno wurde vom gleichnamigen Startup in Zusammenarbeit mit der Stanford University entwickelt, mit der auch die Bayerische Staatsbibliothek eng kooperiert. [Inetbib-Posting vom 22.02.2017].
    Date
    22. 2.2017 10:16:49
  9. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.02
    0.016607981 = product of:
      0.07473592 = sum of:
        0.06519312 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
          0.06519312 = score(doc=3090,freq=14.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.57238775 = fieldWeight in 3090, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.009542801 = product of:
          0.028628403 = sum of:
            0.028628403 = weight(_text_:29 in 3090) [ClassicSimilarity], result of:
              0.028628403 = score(doc=3090,freq=2.0), product of:
                0.12276756 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034900077 = queryNorm
                0.23319192 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small world topology of the Web, with encouraging implications for the design of better crawling algorithms.
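    The link-content conjecture can be tested directly by comparing each page's term-weight vector with those of the pages linking to it. A sketch under assumed inputs - bag-of-words vectors and an in-link map; this is an illustration of the idea, not Menczer's measurement code:

```python
import math

def cosine(u: dict, v: dict) -> float:
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link_content_similarity(pages: dict, in_links: dict) -> dict:
    """pages: url -> {term: weight}; in_links: url -> [linking urls].
    Returns, per page, the mean cosine similarity to the pages that
    link to it; consistently high values support the conjecture."""
    result = {}
    for url, vec in pages.items():
        sources = [pages[u] for u in in_links.get(url, ()) if u in pages]
        if sources:
            result[url] = sum(cosine(vec, s) for s in sources) / len(sources)
    return result
```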
    Date
    9. 1.2005 19:20:29
  10. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.02
    0.015941057 = product of:
      0.047823172 = sum of:
        0.021410782 = weight(_text_:wide in 4472) [ClassicSimilarity], result of:
          0.021410782 = score(doc=4472,freq=4.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.1384612 = fieldWeight in 4472, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.023231456 = weight(_text_:web in 4472) [ClassicSimilarity], result of:
          0.023231456 = score(doc=4472,freq=16.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.2039694 = fieldWeight in 4472, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.0031809339 = product of:
          0.009542801 = sum of:
            0.009542801 = weight(_text_:29 in 4472) [ClassicSimilarity], result of:
              0.009542801 = score(doc=4472,freq=2.0), product of:
                0.12276756 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034900077 = queryNorm
                0.07773064 = fieldWeight in 4472, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4472)
          0.33333334 = coord(1/3)
      0.33333334 = coord(3/9)
    
    Abstract
    Since its appearance in the early 90's, the World Wide Web (WWW or Web) has provided universal access to knowledge, and the world of information has been primarily witness to a great revolution (the digital revolution). It quickly became very popular, making it the largest and most comprehensive database and knowledge base thanks to the amount and diversity of data it contains. However, the considerable increase and evolution of these data raises important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and facilitate access by users, various models are offered by information retrieval systems (IRSs) for the representation and retrieval of web documents. Traditional IRSs index and retrieve these documents using simple keywords that are not semantically linked. This creates limitations in terms of the relevance and ease of exploration of results. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations that are related to the techniques for exploiting these sources of enrichment. When the different sources are used in such a way that they cannot be distinguished by the system, this limits the flexibility of the exploration models that can be applied to the results returned by this system. Users then feel lost among these results, and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and target their search queries even more until they reach the documents that best meet their expectations. In this way, even if the systems manage to find more relevant results, their presentation remains problematic. In order to target search toward more specific information needs of the user and improve the relevance and exploration of the search results, advanced IRSs adopt different data personalization techniques that assume that a user's current search is directly related to their profile and/or previous browsing/search experiences.
    However, this assumption does not hold in all cases: the needs of the user evolve over time and can move away from the previous interests stored in their profile. In other cases, the user's profile may be misused to extract or infer new information needs. This problem is much more accentuated with ambiguous queries. When multiple interests linked to a search query are identified in the user's profile, the system is unable to select the relevant data from that profile to respond to the query. This has a direct impact on the quality of the results provided to this user. In order to overcome some of these limitations, in this research thesis we have been interested in the development of techniques aimed mainly at improving the relevance of the results of current IRSs and facilitating the exploration of large collections of documents. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal is based on the exploitation of different categories of semantic and social information that enrich the universe of document representation and search queries in several dimensions of interpretation. The originality of this representation is to be able to distinguish between the different interpretations used for the description and retrieval of documents. This gives better visibility of the returned results and helps to provide greater flexibility of search and exploration, giving users the ability to navigate one or more views of the data that interest them the most. In addition, the proposed multidimensional representation universes for document description and search-query interpretation help to improve the relevance of the user's results by providing a diversity of search/exploration that helps meet their diverse needs and those of other users. This study exploits different aspects related to personalized search and aims to solve the problems caused by the evolution of the information needs of the user. Thus, when the profile of this user is used by our system, a technique is proposed and used to identify the interests in their profile that are most representative of their current needs. This technique is based on the combination of three influential factors, namely the contextual, frequency and temporal factors of the data. The ability of users to interact, exchange ideas and opinions, and form social networks on the Web has led systems to take an interest in the types of interactions of these users, in their level of interaction with one another, and in their social roles in the system. This social information is discussed and integrated into this research work. The impact of this information and the manner of its integration into the IR process are studied in order to improve the relevance of the results.
    Date
    29. 9.2018 18:57:38
  11. Melucci, M.: Contextual search : a computational framework (2012) 0.01
    0.012974039 = product of:
      0.058383174 = sum of:
        0.037849274 = weight(_text_:wide in 4913) [ClassicSimilarity], result of:
          0.037849274 = score(doc=4913,freq=2.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.24476713 = fieldWeight in 4913, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4913)
        0.020533899 = weight(_text_:web in 4913) [ClassicSimilarity], result of:
          0.020533899 = score(doc=4913,freq=2.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.18028519 = fieldWeight in 4913, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4913)
      0.22222222 = coord(2/9)
    
    Abstract
    The growing availability of data in electronic form, the expansion of the World Wide Web and the accessibility of computational methods for large-scale data processing have allowed researchers in Information Retrieval (IR) to design systems which can effectively and efficiently constrain search within the boundaries given by context, thus transforming classical search into contextual search. Contextual Search: A Computational Framework introduces contextual search within a computational framework based on contextual variables, contextual factors and statistical models. It describes how statistical models can process contextual variables to infer the contextual factors underlying the current search context. It also provides background to the subject by: placing it among other surveys on relevance, interaction, context, and behaviour; providing a description of the contextual variables used for implementing the statistical models which represent and predict relevance and contextual factors; and providing an overview of the evaluation methodologies and findings relevant to this subject. Contextual Search: A Computational Framework is a highly recommended read, both for beginners who are embarking on research in this area and as a useful reference for established IR researchers.
  12. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.01
    0.012974039 = product of:
      0.058383174 = sum of:
        0.037849274 = weight(_text_:wide in 1338) [ClassicSimilarity], result of:
          0.037849274 = score(doc=1338,freq=2.0), product of:
            0.1546338 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.034900077 = queryNorm
            0.24476713 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1338)
        0.020533899 = weight(_text_:web in 1338) [ClassicSimilarity], result of:
          0.020533899 = score(doc=1338,freq=2.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.18028519 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1338)
      0.22222222 = coord(2/9)
    
    Abstract
    A user's query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques model syntagmatic associations, inferred when two terms co-occur more often than by chance in natural language. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches to query expansion and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process improves retrieval effectiveness. This article develops and evaluates a new query expansion technique that is based on a formal, corpus-based model of word meaning that models syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval it demonstrates significant improvements in retrieval effectiveness over a strong baseline system, based on a commercial search engine.
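    A toy contrast between the two association types named here - syntagmatic associations from co-occurrence in the same context, paradigmatic associations from shared neighbouring terms. The counting and overlap measures are illustrative choices, not the authors' formal model:

```python
from collections import Counter, defaultdict
from itertools import combinations

def association_stats(sentences):
    """sentences: iterable of token lists.
    Syntagmatic evidence: two terms occurring in the same sentence.
    Paradigmatic evidence: two terms sharing neighbouring terms."""
    cooc = Counter()                 # syntagmatic co-occurrence counts
    contexts = defaultdict(Counter)  # term -> neighbour profile
    for words in sentences:
        uniq = sorted(set(words))
        for a, b in combinations(uniq, 2):
            cooc[(a, b)] += 1
        for w in uniq:
            contexts[w].update(t for t in words if t != w)

    def paradigmatic(a, b):
        # Jaccard overlap of the two neighbour profiles
        na, nb = set(contexts[a]), set(contexts[b])
        return len(na & nb) / len(na | nb) if na | nb else 0.0

    return cooc, paradigmatic
```

    Terms like "car" and "automobile" rarely co-occur but tend to share neighbours, so they score paradigmatically rather than syntagmatically.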
  13. Brunetti, J.M.; Roberto García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.01
    0.011726123 = product of:
      0.052767552 = sum of:
        0.046462912 = weight(_text_:web in 1626) [ClassicSimilarity], result of:
          0.046462912 = score(doc=1626,freq=16.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.4079388 = fieldWeight in 1626, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.0063046385 = product of:
          0.018913915 = sum of:
            0.018913915 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
              0.018913915 = score(doc=1626,freq=2.0), product of:
                0.12221412 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034900077 = queryNorm
                0.15476047 = fieldWeight in 1626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1626)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
    Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users.
    Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs.
    Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesaurus and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Theme
    Semantic Web
  14. Context: nature, impact, and role : 5th International Conference on Conceptions of Library and Information Science, CoLIS 2005, Glasgow 2005; Proceedings (2005) 0.01
    0.010075314 = product of:
      0.045338914 = sum of:
        0.01451966 = weight(_text_:web in 42) [ClassicSimilarity], result of:
          0.01451966 = score(doc=42,freq=4.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.12748088 = fieldWeight in 42, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=42)
        0.030819254 = weight(_text_:suchmaschine in 42) [ClassicSimilarity], result of:
          0.030819254 = score(doc=42,freq=2.0), product of:
            0.19733392 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.034900077 = queryNorm
            0.15617819 = fieldWeight in 42, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.01953125 = fieldNorm(doc=42)
      0.22222222 = coord(2/9)
    
    Footnote
    Several contributions deal with the problem of relevance. Erica Cosijn and Theo Bothma (Pretoria) argue that, in addition to topical relevance, various other relevance dimensions play a role in user behaviour and, on the basis of an extended relevance model (again going back to Ingwersen), propose that IR systems should also offer the possibility of registering cognitive, situational and socio-cognitive relevance judgements. Elaine Toms et al. (Canada) report on a study that attempted to operationalize the five relevance dimensions established by Tefko Saracevic thirty years ago (cognitive, motivational, situational, topical and algorithmic) and to examine them by means of searches with a Web search engine. The results showed that these five dimensions can be merged into three types representing user, system and task. Olof Sundin and Jenny Johannison (Boras, Sweden) approach the topic of relevance from an entirely different angle by choosing a communication-oriented, neo-pragmatist approach (after Richard Rorty) to analyse information seeking and relevance, drawing also on the work of Michel Foucault. Further interesting articles deal with Bradford's Law of Scattering (Hjørland & Nicolaisen), Information Sharing and Timing (Widén-Wulff & Davenport), Annotations as Context for Searching Documents (Agosti & Ferro), and the value of new information sources such as Web links, newsgroups and blogs for social science and information science research (Thelwall & Wouters). All in all, this is an interesting and ambitious book - naturally not exactly uniform or self-contained in content, but that cannot be expected of a conference volume anyway. Some of the contributions printed here are certainly not easy to read, but they repay the effort. There is also something here for practitioners from libraries and information services, provided they are interested in the scholarly basis of their work. Specialized libraries in the field and larger general libraries should therefore definitely acquire this work.
  15. Vallet, D.; Fernández, M.; Castells, P.: ¬An ontology-based information retrieval model (2005) 0.01
    0.00986444 = product of:
      0.04438998 = sum of:
        0.03484718 = weight(_text_:web in 4708) [ClassicSimilarity], result of:
          0.03484718 = score(doc=4708,freq=4.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.3059541 = fieldWeight in 4708, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4708)
        0.009542801 = product of:
          0.028628403 = sum of:
            0.028628403 = weight(_text_:29 in 4708) [ClassicSimilarity], result of:
              0.028628403 = score(doc=4708,freq=2.0), product of:
                0.12276756 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034900077 = queryNorm
                0.23319192 = fieldWeight in 4708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4708)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based KBs to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.
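    The combination of semantic and keyword-based search for tolerance to KB incompleteness can be pictured as a score interpolation; a hypothetical sketch - the weighting parameter and fallback rule are assumptions, not the paper's exact ranking algorithm:

```python
def combined_score(doc_id, semantic, keyword, alpha=0.7):
    """Interpolate an ontology-based annotation score with a classic
    keyword score; fall back to keywords alone when the KB has no
    annotations for the document (KB incompleteness)."""
    kw = keyword.get(doc_id, 0.0)
    if doc_id not in semantic:
        return kw
    return alpha * semantic[doc_id] + (1 - alpha) * kw
```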
    Source
    The Semantic Web: research and applications ; second European Semantic Web Conference, ESWC 2005, Heraklion, Crete, Greece, May 29 - June 1, 2005 ; proceedings. Eds.: A. Gómez-Pérez u. J. Euzenat
  16. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.01
    0.009685557 = product of:
      0.08717001 = sum of:
        0.08717001 = weight(_text_:suchmaschine in 2257) [ClassicSimilarity], result of:
          0.08717001 = score(doc=2257,freq=4.0), product of:
            0.19733392 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.034900077 = queryNorm
            0.44173864 = fieldWeight in 2257, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2257)
      0.11111111 = coord(1/9)
    
    RSWK
    Suchmaschine / Information Retrieval
    Subject
    Suchmaschine / Information Retrieval
  17. Mandalka, M.: Open semantic search zum unabhängigen und datenschutzfreundlichen Erschliessen von Dokumenten (2015) 0.01
    0.009188526 = product of:
      0.082696736 = sum of:
        0.082696736 = weight(_text_:suchmaschine in 2133) [ClassicSimilarity], result of:
          0.082696736 = score(doc=2133,freq=10.0), product of:
            0.19733392 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.034900077 = queryNorm
            0.41907007 = fieldWeight in 2133, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2133)
      0.11111111 = coord(1/9)
    
    Abstract
    Whether a major leak, or the merging or (re-)indexing of extensive (collaborative) investigations or archives: in journalism, ever larger piles of data and documents have to be made searchable. Analysis tools integrated into a search engine help (semi-)automatically.
    Content
    "Open Semantic Desktop Search Zur Tagung des Netzwerk Recherche ist die Desktop Suchmaschine Open Semantic Desktop Search zum unabhängigen und datenschutzfreundlichen Erschliessen und Analysieren von Dokumentenbergen nun erstmals auch als deutschsprachige Version verfügbar. Dank mächtiger Open Source Basis kann die auf Debian GNU/Linux und Apache Solr basierende freie Software als unter Linux, Windows oder Mac lauffähige virtuelle Maschine kostenlos heruntergeladen, genutzt, weitergegeben und weiterentwickelt werden. Dokumentenberge erschliessen Ob grösserer Leak oder Zusammenwürfeln oder (wieder) Erschliessen umfangreicherer (kollaborativer) Recherche(n) oder Archive: Hin und wieder müssen größere Datenberge bzw. Dokumentenberge erschlossen werden, die so viele Dokumente enthalten, dass Mensch diese Masse an Dokumenten nicht mehr alle nacheinander durchschauen und einordnen kann. Auch bei kontinuierlicher Recherche zu Fachthemen sammeln sich mit der Zeit größere Mengen digitalisierter oder digitaler Dokumente zu grösseren Datenbergen an, die immer weiter wachsen und deren Informationen mit einer Suchmaschine für das Archiv leichter auffindbar bleiben. Moderne Tools zur Datenanalyse in Verbindung mit Enterprise Search Suchlösungen und darauf aufbauender Recherche-Tools helfen (halb)automatisch.
    A virtual machine for greater platform independence: The virtual machine Open Semantic Desktop Search, now also available in German and preconfigured with German data such as place names or members of the Bundestag, also enables the search and analysis of documents with the search engine Open Semantic Search on individual desktop computers or notebooks running Windows or macOS. As a virtual machine (VM), the search engine Open Semantic Search can not only be installed for particularly sensitive documents with the encrypted live system InvestigateIX as a sealed-off system on encrypted external storage media; as a desktop VM it can also simply be integrated under Windows or on a Mac into an existing system environment with its software and data, without depending on a search-engine server (which remains an option for joint research in a team or for a newsroom). Data protection & independence: greater independence from central IT infrastructures for independent investigative data journalism. This makes investigative research as independent as possible: without expensive central servers dependent on administrators, without expensive software licences tied to the number of documents, without the internet and without snooping cloud services. Data analysis and search take place on one's own computer, not, as with many other solutions, in the so-called cloud."
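    Since Open Semantic Search builds on Apache Solr, the index behind such a desktop installation can also be queried directly over Solr's standard HTTP search API. A hypothetical sketch; host, port, core name and field names are assumptions about a local setup, not documented defaults:

      import requests

      SOLR = "http://localhost:8983/solr/documents/select"  # core name assumed

      def search(query: str, rows: int = 10):
          # standard Solr select parameters, with snippet highlighting enabled
          params = {"q": query, "rows": rows, "wt": "json",
                    "hl": "on", "hl.fl": "content"}
          resp = requests.get(SOLR, params=params, timeout=10)
          resp.raise_for_status()
          data = resp.json()
          for doc in data["response"]["docs"]:
              print(doc.get("id"), doc.get("title"))
          return data

      if __name__ == "__main__":
          search('leak AND "Bundestag"')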
  18. Atanassova, I.; Bertin, M.: Semantic facets for scientific information retrieval (2014) 0.01
    0.008862384 = product of:
      0.039880726 = sum of:
        0.02874746 = weight(_text_:web in 4471) [ClassicSimilarity], result of:
          0.02874746 = score(doc=4471,freq=2.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.25239927 = fieldWeight in 4471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
        0.011133268 = product of:
          0.0333998 = sum of:
            0.0333998 = weight(_text_:29 in 4471) [ClassicSimilarity], result of:
              0.0333998 = score(doc=4471,freq=2.0), product of:
                0.12276756 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034900077 = queryNorm
                0.27205724 = fieldWeight in 4471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4471)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Source
    Semantic Web Evaluation Challenge. SemWebEval 2014 at ESWC 2014, Anissaras, Crete, Greece, May 25-29, 2014, Revised Selected Papers. Eds.: V. Presutti et al
  19. Kasprzik, A.; Kett, J.: Vorschläge für eine Weiterentwicklung der Sacherschließung und Schritte zur fortgesetzten strukturellen Aufwertung der GND (2018) 0.01
    0.008220368 = product of:
      0.036991656 = sum of:
        0.02903932 = weight(_text_:web in 4599) [ClassicSimilarity], result of:
          0.02903932 = score(doc=4599,freq=4.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.25496176 = fieldWeight in 4599, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4599)
        0.007952334 = product of:
          0.023857003 = sum of:
            0.023857003 = weight(_text_:29 in 4599) [ClassicSimilarity], result of:
              0.023857003 = score(doc=4599,freq=2.0), product of:
                0.12276756 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034900077 = queryNorm
                0.19432661 = fieldWeight in 4599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4599)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    Given the continuing flood of publications, the question of how the barriers to maintaining title and authority data can be lowered is becoming ever more pressing - both for intellectual and for automated subject indexing. Data and work quality in subject indexing can be improved a) by a flexible visualization of the Gemeinsame Normdatei (GND, the Integrated Authority File) and other knowledge organization systems, so that their graph structure becomes intuitively graspable, and b) by an investigative analysis of their current structure and the development of tailored automated methods for detecting and correcting erroneous patterns. As part of the GND development programme 2017-2021, the German National Library (DNB) is examining which conditions must be met for a fruitful community-driven open-source development of such tools. Further potential lies in a long-term transition to representing title and authority data in description languages in the sense of the Semantic Web (RDF; OWL, SKOS). The GND would thus profit from interoperability with other controlled vocabularies and from easier interaction with other subject communities, and could in turn become an even more attractive knowledge organization system outside the library world as well. Moreover, the Semantic Web approaches offer the possibility of developing more strongly formalized, structuring satellite vocabularies around the GND. Not least, this also opens up new perspectives for automated subject indexing. It would be worthwhile to explore more closely how and to what extent semantic-logical methods can enrich the existing mix of methods.
    Date
    13.12.2018 13:29:07
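    To make the envisaged transition concrete: an authority record expressed in SKOS is just a handful of RDF triples. A minimal sketch using rdflib; the GND identifiers and labels below are illustrative placeholders, not actual GND data:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      GND = Namespace("https://d-nb.info/gnd/")  # real GND URI base

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("gnd", GND)

      concept = GND["4072803-1"]  # hypothetical identifier
      g.add((concept, RDF.type, SKOS.Concept))
      g.add((concept, SKOS.prefLabel,
             Literal("Information Retrieval", lang="de")))
      g.add((concept, SKOS.altLabel,
             Literal("Informationswiedergewinnung", lang="de")))
      g.add((concept, SKOS.broader, GND["4072806-7"]))  # hypothetical broader term

      print(g.serialize(format="turtle"))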
  20. Brandão, W.C.; Santos, R.L.T.; Ziviani, N.; Moura, E.S. de; Silva, A.S. da: Learning to expand queries using entities (2014) 0.01
    0.00820447 = product of:
      0.036920115 = sum of:
        0.02903932 = weight(_text_:web in 1343) [ClassicSimilarity], result of:
          0.02903932 = score(doc=1343,freq=4.0), product of:
            0.113896765 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.034900077 = queryNorm
            0.25496176 = fieldWeight in 1343, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1343)
        0.007880798 = product of:
          0.023642393 = sum of:
            0.023642393 = weight(_text_:22 in 1343) [ClassicSimilarity], result of:
              0.023642393 = score(doc=1343,freq=2.0), product of:
                0.12221412 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034900077 = queryNorm
                0.19345059 = fieldWeight in 1343, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1343)
          0.33333334 = coord(1/3)
      0.22222222 = coord(2/9)
    
    Abstract
    A substantial fraction of web search queries contain references to entities, such as persons, organizations, and locations. Recently, methods that exploit named entities have been shown to be more effective for query expansion than traditional pseudo-relevance feedback methods. In this article, we introduce a supervised learning approach that exploits named entities for query expansion using Wikipedia as a repository of high-quality feedback documents. In contrast with existing entity-oriented pseudo-relevance feedback approaches, we tackle query expansion as a learning-to-rank problem. As a result, not only do we select effective expansion terms, but we also weigh these terms according to their predicted effectiveness. To this end, we exploit the rich structure of Wikipedia articles to devise discriminative term features, including each candidate term's proximity to the original query terms, as well as its frequency across multiple article fields and in category and infobox descriptors. Experiments on three Text REtrieval Conference web test collections attest to the effectiveness of our approach, with gains of up to 23.32% in terms of mean average precision, 19.49% in terms of precision at 10, and 7.86% in terms of normalized discounted cumulative gain, compared with a state-of-the-art approach for entity-oriented query expansion.
    Date
    22. 8.2014 17:07:50
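    The core of the approach is that candidate expansion terms are not merely selected but ranked by predicted effectiveness, using a model learned over term features such as proximity to the query and frequency across article fields. A hypothetical sketch of that scoring step; the three features and the weights are illustrative stand-ins for the paper's much richer feature set and trained learning-to-rank model:

      from dataclasses import dataclass

      @dataclass
      class Candidate:
          term: str
          proximity: float   # closeness to the original query terms in the article
          field_freq: float  # frequency across title/abstract/infobox fields
          in_infobox: float  # 1.0 if the term appears in an infobox descriptor

      WEIGHTS = (0.5, 0.3, 0.2)  # assumed output of a trained learning-to-rank model

      def expansion_terms(candidates, k=5):
          def predicted_effectiveness(c: Candidate) -> float:
              return (WEIGHTS[0] * c.proximity + WEIGHTS[1] * c.field_freq
                      + WEIGHTS[2] * c.in_infobox)
          ranked = sorted(candidates, key=predicted_effectiveness, reverse=True)
          # keep the top-k terms, each weighted by its predicted effectiveness
          return [(c.term, round(predicted_effectiveness(c), 3)) for c in ranked[:k]]

      print(expansion_terms([
          Candidate("ontology", 0.8, 0.6, 1.0),
          Candidate("metadata", 0.4, 0.7, 0.0),
          Candidate("crete", 0.1, 0.2, 0.0),
      ]))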

Languages

  • e 86
  • d 22
  • f 1

Types

  • a 91
  • el 14
  • m 11
  • r 2
  • s 1
  • x 1