Search (413 results, page 3 of 21)

  • Active filter: theme_ss:"Retrievalstudien"
  1. Drabenstott, K.M.; Weller, M.S.: Improving personal-name searching in online catalogs (1996) 0.03
    0.029393207 = product of:
      0.08817962 = sum of:
        0.08817962 = product of:
          0.13226943 = sum of:
            0.0961623 = weight(_text_:online in 6742) [ClassicSimilarity], result of:
              0.0961623 = score(doc=6742,freq=14.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.62100726 = fieldWeight in 6742, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6742)
            0.03610713 = weight(_text_:retrieval in 6742) [ClassicSimilarity], result of:
              0.03610713 = score(doc=6742,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 6742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6742)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports results of a study to examine the performance of online catalogue searches involving personal names and to recommend improvements to the basic system approach to soliciting user queries and searching for them. The research questions addressed in the study were: how online systems can choose searching approaches on their own that are likely to produce useful retrieval; how online systems solicit queries from users; and how users respond to an experimental online catalogue that prompts them for the different elements of their personal name queries. Improvements include: the implementation of a new design for online catalogue searching that features search trees; new methods for soliciting user queries bearing personal names; and enlisting the participation of online catalogue users in the evaluation of system prompts, instructions, and messages that request input from them.
  2. Information retrieval experiment (1981) 0.03
    0.029305872 = product of:
      0.08791761 = sum of:
        0.08791761 = product of:
          0.13187641 = sum of:
            0.03634593 = weight(_text_:online in 2653) [ClassicSimilarity], result of:
              0.03634593 = score(doc=2653,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23471867 = fieldWeight in 2653, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2653)
            0.09553049 = weight(_text_:retrieval in 2653) [ClassicSimilarity], result of:
              0.09553049 = score(doc=2653,freq=14.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.61896384 = fieldWeight in 2653, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2653)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Content
    Contains the contributions: ROBERTSON, S.E.: The methodology of information retrieval experiment; RIJSBERGEN, C.J. van: Retrieval effectiveness; BELKIN, N.: Ineffable concepts in information retrieval; TAGUE, J.M.: The pragmatics of information retrieval experimentation; LANCASTER, F.W.: Evaluation within the environment of an operating information service; BARRACLOUGH, E.D.: Opportunities for testing with online systems; KEEN, M.E.: Laboratory tests of manual systems; ODDY, R.N.: Laboratory tests: automatic systems; HEINE, M.D.: Simulation, and simulation experiments; COOPER, W.S.: Gedanken experimentation: an alternative to traditional system testing?; SPARCK JONES, K.: Actual tests - retrieval system tests; EVANS, L.: An experiment: search strategy variation in SDI profiles; SALTON, G.: The Smart environment for retrieval system evaluation - advantage and problem areas
  3. Binder, G.; Stahl, M.; Faulborn, L.: Vergleichsuntersuchung MESSENGER-FULCRUM (2000) 0.03
    0.02904519 = product of:
      0.043567784 = sum of:
        0.031532075 = weight(_text_:im in 4885) [ClassicSimilarity], result of:
          0.031532075 = score(doc=4885,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.2186231 = fieldWeight in 4885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4885)
        0.012035711 = product of:
          0.03610713 = sum of:
            0.03610713 = weight(_text_:retrieval in 4885) [ClassicSimilarity], result of:
              0.03610713 = score(doc=4885,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 4885, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4885)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    In a user test conducted as part of the GIRT project, the performance of two retrieval languages for database searching was examined. This report presents the results: the FULCRUM system is based on automatic indexing and returns search results ranked by statistical relevance. The standard free-text search of the MESSENGER system was supplemented with the descriptors assigned intellectually by the IZ. The results show that in FULCRUM the test subjects preferred Boolean exact-match retrieval over the vector-space model (best-match approach). The hybrid of intellectual and automatic indexing realized in MESSENGER proved superior to the purely quantitative-statistical approach in terms of recall.
  4. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.03
    0.028695136 = product of:
      0.08608541 = sum of:
        0.08608541 = product of:
          0.12912811 = sum of:
            0.080738 = weight(_text_:retrieval in 3368) [ClassicSimilarity], result of:
              0.080738 = score(doc=3368,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5231199 = fieldWeight in 3368, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
            0.048390117 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
              0.048390117 = score(doc=3368,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2708308 = fieldWeight in 3368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance. They can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
  5. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.03
    0.028172573 = product of:
      0.08451772 = sum of:
        0.08451772 = product of:
          0.12677658 = sum of:
            0.07147358 = weight(_text_:retrieval in 3087) [ClassicSimilarity], result of:
              0.07147358 = score(doc=3087,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46309367 = fieldWeight in 3087, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
            0.055302992 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.055302992 = score(doc=3087,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was the discussion of retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  6. ¬The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.03
    0.028172573 = product of:
      0.08451772 = sum of:
        0.08451772 = product of:
          0.12677658 = sum of:
            0.07147358 = weight(_text_:retrieval in 4049) [ClassicSimilarity], result of:
              0.07147358 = score(doc=4049,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46309367 = fieldWeight in 4049, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
            0.055302992 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.055302992 = score(doc=4049,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was the discussion of retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks are: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  7. Dzeyk, W.: Effektiv und nutzerfreundlich : Einsatz von semantischen Technologien und Usability-Methoden zur Verbesserung der medizinischen Literatursuche (2010) 0.03
    0.027514528 = product of:
      0.04127179 = sum of:
        0.035253935 = weight(_text_:im in 4416) [ClassicSimilarity], result of:
          0.035253935 = score(doc=4416,freq=10.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.24442805 = fieldWeight in 4416, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4416)
        0.0060178554 = product of:
          0.018053565 = sum of:
            0.018053565 = weight(_text_:retrieval in 4416) [ClassicSimilarity], result of:
              0.018053565 = score(doc=4416,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.11697317 = fieldWeight in 4416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4416)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    This work presents the results of the MorphoSaurus project of the German National Library of Medicine (ZB MED). The aim of the research project was to substantially improve the information retrieval of the medical search engine MEDPILOT by means of computational-linguistic approaches and to optimize the usability of the search engine interface. The project was carried out at ZB MED in Cologne in cooperation with Averbis GmbH of Freiburg between June 2007 and December 2008 and was made possible by funding from the Pakt für Forschung und Innovation. While Averbis contributed the MorphoSaurus technology for handling problematic linguistic aspects of search queries and implemented key ZB MED databases in a test system with modern search engine technology, a ZB MED team evaluated the potential of this technology. In addition to a comparison of the performance of the existing MEDPILOT search with the new search architecture, a benchmark against competing search engines such as PubMed, Scirus, Google, Google Scholar, and GoPubMed was conducted. For the evaluation, several test collections were created whose items and search phrases were derived from a content analysis of real queries submitted to the MEDPILOT system. An assessment of the relevance of the test search engine's hits, as the central criterion for search quality, produced the following result: the MorphoSaurus technology enables largely language-independent processing of foreign-language medical content. The new technique also shows its strengths particularly where lay and expert vocabulary must be treated equivalently and where compounds, synonyms, and grammatical variants have to be analyzed. In addition, modules for detecting spelling errors and for resolving acronyms and medical abbreviations have been implemented, which promise a further performance gain. A comparison based on MEDLINE data showed that the Averbis test search environment was clearly superior to the search engines MEDPILOT, PubMed, GoPubMed, and Scirus: hit relevance was higher, more hits were found overall, and the number of zero-hit responses was the lowest of all the search engines compared.
    In a comparison that took all available sources into account, the MorphoSaurus technique achieved results comparable to those of Google or Google Scholar despite a considerably smaller data set. The evaluation results suggest that, with an extension of the existing data basis, the MorphoSaurus approach could even clearly surpass the performance of Google or Google Scholar for medical literature search. In addition to the retrieval tests, a usability study of the test search engine was conducted with participants from the medical field. The test persons attested to the high usability and usefulness of the search interface. The scenario-based usability test also showed that the participants rated the integrated support measures for increasing user-friendliness during the search as very positive and useful. In the test search engine this support was realized, for example, by expanding and presenting related MeSH and ICD-10 terms. The introduction of a slider for effectively narrowing the search space was likewise rated predominantly positively. Furthermore, after a query was submitted, so-called related search terms from various medical subfields were displayed. This facet function served to narrow and refine the search and was judged by the majority of the test persons to be a useful aid. Overall, the MorphoSaurus project, with its specific approach, is a successful example of the capacity of libraries to innovate in public information provision. Moreover, because the MorphoSaurus technology can be adapted by means of subject-specific thesauri, it offers high transferability to search engine projects in other content domains.
  8. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.03
    0.027413448 = product of:
      0.08224034 = sum of:
        0.08224034 = product of:
          0.123360515 = sum of:
            0.081883274 = weight(_text_:retrieval in 6967) [ClassicSimilarity], result of:
              0.081883274 = score(doc=6967,freq=14.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5305404 = fieldWeight in 6967, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
            0.04147724 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.04147724 = score(doc=6967,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Explains briefly what constitutes the imaging process and explains how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information-theoretic measure. Reports results of an evaluation exercise to compare the performance of imaging retrieval, using possible world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially, the performance of imaging retrieval was seen to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that needs to be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  9. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.03
    0.027235493 = product of:
      0.04085324 = sum of:
        0.027027493 = weight(_text_:im in 328) [ClassicSimilarity], result of:
          0.027027493 = score(doc=328,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.18739122 = fieldWeight in 328, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=328)
        0.013825747 = product of:
          0.04147724 = sum of:
            0.04147724 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.04147724 = score(doc=328,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    This article describes an eye-tracking experiment that investigated when, and on the basis of which information, relevance decisions are made during topic-based document assessment, and which factors influence the relevance decision. After a short introduction, relevant studies are reviewed in which eye tracking was used as a method for investigating interaction behaviour with result lists (information seeking behaviour). User behaviour is influenced above all by different task types, by the information displayed, and by the ranking of a result. Eye-tracking studies also allow users to be assigned to different classes of assessment and reading types. This information can be used as implicit feedback to personalize the search and to increase the relevance of search results without active involvement of the user. In an exploratory eye-tracking experiment with 12 students of Hochschule Darmstadt, two typical assessment types are identified on the basis of the duration of the overall assessment, the number of fixations, the number of metadata elements visited, and the length of the scan path. The metadata field Abstract is reliably identified in the experiment as the most important document property for assigning relevance.
    Date
    22. 7.2012 19:25:54
  10. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.03
    0.026824525 = product of:
      0.08047357 = sum of:
        0.08047357 = product of:
          0.12071036 = sum of:
            0.051581617 = weight(_text_:retrieval in 3103) [ClassicSimilarity], result of:
              0.051581617 = score(doc=3103,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33420905 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
            0.06912874 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.06912874 = score(doc=3103,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:55:22
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees u. D.K. Harman
  11. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.03
    0.026824525 = product of:
      0.08047357 = sum of:
        0.08047357 = product of:
          0.12071036 = sum of:
            0.051581617 = weight(_text_:retrieval in 3107) [ClassicSimilarity], result of:
              0.051581617 = score(doc=3107,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33420905 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
            0.06912874 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06912874 = score(doc=3107,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees u. D.K. Harman
  12. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.03
    0.026824525 = product of:
      0.08047357 = sum of:
        0.08047357 = product of:
          0.12071036 = sum of:
            0.051581617 = weight(_text_:retrieval in 2417) [ClassicSimilarity], result of:
              0.051581617 = score(doc=2417,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33420905 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
            0.06912874 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.06912874 = score(doc=2417,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Pages
    S.22-25
  13. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.03
    0.026800975 = product of:
      0.080402926 = sum of:
        0.080402926 = product of:
          0.12060438 = sum of:
            0.07221426 = weight(_text_:retrieval in 5001) [ClassicSimilarity], result of:
              0.07221426 = score(doc=5001,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 5001, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
            0.048390117 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.048390117 = score(doc=5001,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  14. Shenouda, W.: Online bibliographic searching : how end-users modify their search strategies (1990) 0.03
    0.026084248 = product of:
      0.07825274 = sum of:
        0.07825274 = product of:
          0.11737911 = sum of:
            0.081271976 = weight(_text_:online in 4895) [ClassicSimilarity], result of:
              0.081271976 = score(doc=4895,freq=10.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.524847 = fieldWeight in 4895, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4895)
            0.03610713 = weight(_text_:retrieval in 4895) [ClassicSimilarity], result of:
              0.03610713 = score(doc=4895,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 4895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4895)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The study attempted to identify how end-users modify their initial search strategies in the light of new information presented during their interaction with an online bibliographic information retrieval system in a real environment. This exploratory study was also conducted to determine the effectiveness of the changes made by users during the online process in retrieving relevant documents. Analysis of the data shows that all end-users modify their searches during the online process. Results indicate that certain changes were made more frequently than others. Changes affecting relevance and characteristics of end-users' online search behaviour were also identified.
  15. Kaltenborn, K.-F.: Endnutzerrecherchen in der CD-ROM-Datenbank Medline : T.1: Evaluations- und Benutzerforschung über Nutzungscharakteristika, Bewertung der Rechercheergebnisse und künftige Informationsgewinnung; T.2: Evaluations- und Benutzerforschung über Recherchequalität und Nutzer-Computer/Datenbank-Interaktion (1991) 0.02
    0.024941362 = product of:
      0.037412044 = sum of:
        0.027027493 = weight(_text_:im in 5105) [ClassicSimilarity], result of:
          0.027027493 = score(doc=5105,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.18739122 = fieldWeight in 5105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=5105)
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 5105) [ClassicSimilarity], result of:
              0.031153653 = score(doc=5105,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 5105, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5105)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    The introduction of CD-ROM databases as a new information technology has fundamentally changed how information is obtained in certain scientific disciplines. The article reports results of accompanying scientific research on end-user searching in the CD-ROM version of the MEDLINE database, for which three separate surveys were carried out. According to these, the large majority of end users (89.3%) are satisfied with their search results, with users who have little search experience reaching a higher satisfaction rate than users with more extensive search skills. The reasons for using CD-ROM systems arise predominantly from everyday clinical routine or daily research practice, whereas mediated online literature searches tend more often to be connected with one-off events in academic education and training. For the physicians and scientists surveyed, independent CD-ROM literature searching is the preferred method of obtaining information. The end-user searches analysed, however, exhibit errors and deficits with regard to an optimal search strategy, which lead to unnoticed losses of information and to misjudgements of the state of scientific knowledge.
  16. Barker, A.L.: Non-Boolean searching on commercial online systems : optimising use of Dialog TARGET and ESA/IRS QUESTQUORUM (1995) 0.02
    0.024177555 = product of:
      0.07253266 = sum of:
        0.07253266 = product of:
          0.10879899 = sum of:
            0.07269186 = weight(_text_:online in 3853) [ClassicSimilarity], result of:
              0.07269186 = score(doc=3853,freq=8.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46943733 = fieldWeight in 3853, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3853)
            0.03610713 = weight(_text_:retrieval in 3853) [ClassicSimilarity], result of:
              0.03610713 = score(doc=3853,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 3853, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3853)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Considers 2 non-Boolean searching systems available on commercial online systems. QUESTQUORUM, based on coordination level searching, was introduced by ESA/IRS in Dec 85. TARGET, which employs partial-match probabilistic retrieval, was introduced by DIALOG in Dec 93. 6 subject searches were carried out on databases available on both Dialog and ESA/IRS to compare TARGET and QUESTQUORUM with Boolean searching. Outlines the main advantages of these tools, and their disadvantages. Suggests when their use may be preferable.
    Source
    Online information 95: Proceedings of the 19th International online information meeting, London, 5-7 December 1995. Ed.: D.I. Raitt u. B. Jeapes
  17. Behnert, C.; Lewandowski, D.: ¬A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.02
    0.02389313 = product of:
      0.07167938 = sum of:
        0.07167938 = product of:
          0.10751907 = sum of:
            0.025961377 = weight(_text_:online in 3700) [ClassicSimilarity], result of:
              0.025961377 = score(doc=3700,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 3700, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3700)
            0.08155769 = weight(_text_:retrieval in 3700) [ClassicSimilarity], result of:
              0.08155769 = score(doc=3700,freq=20.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5284309 = fieldWeight in 3700, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3700)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials. Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
  18. Tibbo, H.R.: ¬The epic struggle : subject retrieval from large bibliographic databases (1994) 0.02
    0.022301702 = product of:
      0.0669051 = sum of:
        0.0669051 = product of:
          0.10035765 = sum of:
            0.031153653 = weight(_text_:online in 2179) [ClassicSimilarity], result of:
              0.031153653 = score(doc=2179,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 2179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2179)
            0.069204 = weight(_text_:retrieval in 2179) [ClassicSimilarity], result of:
              0.069204 = score(doc=2179,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.44838852 = fieldWeight in 2179, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2179)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses a retrieval study that focused on collection level archival records in the OCLC OLUC, made accessible through the EPIC online search system. Data were also collected from the local OPAC at the University of North Carolina at Chapel Hill (UNC-CH), in which UNC-CH produced OCLC records are loaded. The chief objective was to explore the retrieval environments in which a random sample of USMARC AMC records produced at UNC-CH were found: specifically, to obtain a picture of the density of these databases with regard to each subject heading applied and, more generally, for each record. Key questions were: how many records would be retrieved for each subject heading attached to each of the records; and what was the nature of these subject headings vis-à-vis the number of hits associated with them. Results show that large retrieval sets are a potential problem with national bibliographic utilities and that the local and national retrieval environments can vary greatly. The need for specificity in indexing is emphasized.
  19. Borgman, C.L.: Why are online catalogs still hard to use? (1996) 0.02
    0.022224266 = product of:
      0.066672795 = sum of:
        0.066672795 = product of:
          0.10000919 = sum of:
            0.058743894 = weight(_text_:online in 4380) [ClassicSimilarity], result of:
              0.058743894 = score(doc=4380,freq=16.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.37936267 = fieldWeight in 4380, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4380)
            0.041265294 = weight(_text_:retrieval in 4380) [ClassicSimilarity], result of:
              0.041265294 = score(doc=4380,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 4380, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4380)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We return to arguments made 10 years ago that online catalogs are difficult to use because their design does not incorporate sufficient understanding of searching behavior. The earlier article examined studies of information retrieval system searching for their implications for online catalog design; this article examines the implications of card catalog design for online catalogs. With this analysis, we hope to contribute to a better understanding of user behavior and to lay to rest the card catalog design model for online catalogs. We discuss the problems with query matching systems, which were designed for skilled search intermediaries rather than end-users, and the knowledge and skills they require in the information-seeking process, illustrated with examples of searching card and online catalogs. Searching requires conceptual knowledge of the information retrieval process - translating an information need into a searchable query; semantic knowledge of how to implement a query in a given system - the how and when to use system features; and technical skills in executing the query - basic computing skills and the syntax of entering queries as specific search statements. In the short term, we can help make online catalogs easier to use through improved training and documentation that is based on information-seeking behavior, with the caveat that good training is not a substitute for good system design. Our long term goal should be to design intuitive systems that require a minimum of instruction. Given the complexity of the information retrieval problem and the limited capabilities of today's systems, we are far from achieving that goal. If libraries are to provide primary information services for the networked world, they need to put research results on the information-seeking process into practice in designing the next generation of online public access information retrieval systems.
  20. Bates, M.J.: Document familiarity, relevance, and Bradford's law : the Getty Online Searching Project report; no.5 (1996) 0.02
    0.022224266 = product of:
      0.066672795 = sum of:
        0.066672795 = product of:
          0.10000919 = sum of:
            0.058743894 = weight(_text_:online in 6978) [ClassicSimilarity], result of:
              0.058743894 = score(doc=6978,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.37936267 = fieldWeight in 6978, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6978)
            0.041265294 = weight(_text_:retrieval in 6978) [ClassicSimilarity], result of:
              0.041265294 = score(doc=6978,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 6978, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6978)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Getty Online Searching Project studied the end-user searching behaviour of 27 humanities scholars over a 2-year period. A number of scholars anticipated that they were already familiar with a percentage of the records their searches retrieved. High document familiarity can be a significant factor in searching. Draws implications regarding the impact of high document familiarity on relevance and information retrieval theory. Makes speculations regarding high document familiarity and Bradford's law.
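
Note on the relevance scores: the per-result breakdown shown under each entry above is Lucene's ClassicSimilarity (TF-IDF) explanation. For every matching query term, the printed weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm; the term weights are summed and scaled by the coord factors. The following minimal Python sketch is not part of the search system; it only reproduces the arithmetic of result 1's breakdown from the numbers printed there, and the helper name term_score is illustrative.

    import math

    # Constants copied from the ClassicSimilarity explanation of result 1 (doc 6742).
    QUERY_NORM = 0.051022716

    def term_score(freq, idf, field_norm):
        # ClassicSimilarity per-term weight: queryWeight * fieldWeight
        tf = math.sqrt(freq)                  # tf = sqrt(term frequency in the field)
        query_weight = idf * QUERY_NORM       # e.g. ~0.1548489 for "online"
        field_weight = tf * idf * field_norm  # e.g. ~0.62100726 for "online"
        return query_weight * field_weight

    online = term_score(freq=14.0, idf=3.0349014, field_norm=0.0546875)   # ~0.0961623
    retrieval = term_score(freq=2.0, idf=3.024915, field_norm=0.0546875)  # ~0.0361071

    # Sum of term weights, scaled by the coord(2/3) and coord(1/3) factors in the breakdown.
    score = (1 / 3) * ((2 / 3) * (online + retrieval))
    print(f"{score:.9f}")  # ~0.0293932, matching the 0.029393207 shown for result 1 up to rounding

Every score on this page follows the same pattern; only the term frequencies, idf values, field norms, and coord factors differ from result to result.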

Types

  • a 378
  • s 15
  • el 8
  • m 8
  • r 6
  • x 5
  • p 2
  • d 1