Search (103 results, page 1 of 6)

  • Filter: theme_ss:"Retrievalstudien"
  1. MacCall, S.L.; Cleveland, A.D.; Gibson, I.E.: Outline and preliminary evaluation of the classical digital library model (1999) 0.01
    0.013610008 = product of:
      0.040830024 = sum of:
        0.03206805 = weight(_text_:internet in 6541) [ClassicSimilarity], result of:
          0.03206805 = score(doc=6541,freq=6.0), product of:
            0.11352337 = queryWeight, product of:
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.038453303 = queryNorm
            0.28247973 = fieldWeight in 6541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
        0.008761973 = product of:
          0.026285918 = sum of:
            0.026285918 = weight(_text_:29 in 6541) [ClassicSimilarity], result of:
              0.026285918 = score(doc=6541,freq=2.0), product of:
                0.13526669 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038453303 = queryNorm
                0.19432661 = fieldWeight in 6541, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6541)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
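The breakdown above is Lucene's ClassicSimilarity "explain" output. As a rough illustration (a sketch, not tied to this particular installation), the first term's contribution can be recomputed from the quantities shown; ClassicSimilarity's idf is 1 + ln(maxDocs / (docFreq + 1)), and a term's weight is queryWeight x fieldWeight:

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one term's contribution as shown in a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # 2.4494898 for freq=6.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 2.9522398 for docFreq=6276
    query_weight = idf * query_norm                   # 0.11352337
    field_weight = tf * idf * field_norm              # 0.28247973
    return query_weight * field_weight

# The _text_:internet term of result 1:
score = classic_similarity_term_score(
    freq=6.0, doc_freq=6276, max_docs=44218,
    query_norm=0.038453303, field_norm=0.0390625)
print(round(score, 7))  # ≈ 0.0320680, matching the weight line above
```

The coord() factors in the explain output then scale the summed term weights by the fraction of query clauses a document matched.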
    
    Abstract
    The growing number of networked information resources and services offers unprecedented opportunities for delivering high-quality information to the computer desktop of a wide range of individuals. However, there is currently a reliance on a database retrieval model, in which end-users use keywords to search large collections of automatically indexed resources in order to find needed information. As an alternative to the database retrieval model, this paper outlines the classical digital library model, which is derived from the traditional practices of library and information science professionals. These practices include the selection and organization of information resources for local populations of users and the integration of advanced information retrieval tools, such as databases and the Internet, into these collections. To evaluate this model, library and information professionals and end-users involved with primary care medicine were asked to respond to a series of questions comparing their experiences with a digital library developed for the primary care population to their experiences with general Internet use. Preliminary results are reported.
    Date
    29. 9.2001 20:12:49
    Theme
    Internet
  2. Effektive Information Retrieval Verfahren in Theorie und Praxis : ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005 (2006) 0.01
    Abstract
    Information retrieval has developed into a key technology of the knowledge society. The number of queries submitted daily to Internet search engines is only one indicator of the topic's importance. The collected volume covers topics such as information retrieval fundamentals, retrieval systems, digital libraries, evaluation, and multilingual systems; it describes application scenarios and addresses new challenges facing information retrieval. The contributions deal with current topics and new challenges in the field. Several of them touch on the intensive involvement of the Information Science group at the University of Hildesheim in the Cross Language Evaluation Forum (CLEF), a European evaluation initiative for research on multilingual retrieval systems. Application scenarios and the discussion of current, practical questions likewise play a major role.
    Footnote
    Rez. in: Information - Wissenschaft und Praxis 57(2006) H.5, S.290-291 (C. Schindler): "Less than a year after the Fourth Hildesheim Evaluation and Retrieval Workshop (HIER 2005) of July 2005, the accompanying proceedings have appeared. The Hildesheim Information Science group had issued the invitation in order to present its own research results, and those of some external experts, on the topic of information retrieval to a professional audience and to put them up for discussion. Under the title 'Effektive Information Retrieval Verfahren in Theorie und Praxis', nearly all contributions to the workshop are collected in the now-published volume of 15 papers. With its focus on information retrieval (IR), the volume presents a subfield of information science that has always stood at the center of information science research. Whether through the performance gains of processors and storage media, the spread of the Internet across national borders, or the steady growth of knowledge production, it is clear that in an increasingly interconnected world, orientation within and retrieval of documents from large bodies of knowledge have become a central challenge. The new volume presents current approaches to this topic by way of practice-oriented projects and theoretical discussions. The core topic of information retrieval is subdivided in the volume into the areas of retrieval systems, digital libraries, evaluation, and multilingual systems. The articles in the individual sections are on the whole quite heterogeneous and therefore do not overlap in content. Complete thematic coverage of the various areas is, however, not achieved either, which can only be expected to a limited extent from a presentation of the research results of a single institute and its cooperation partners.
Thus a thematic concentration can be recognized, both in the structure of the volume and in the individual contributions, that reflects the specific profile and distinctive character of Hildesheim information science in the field of information retrieval. Part of this is its multilingual and interdisciplinary orientation, whose practice-oriented, international research focuses on the interfaces between information science, linguistics, and computer science.
    In the first chapter, 'Retrieval Systems', various information retrieval systems are presented and approaches to their design are discussed. Jan-Hendrik Scheufen presents the meta-framework RECOIN for information retrieval research, which is distinguished by its flexible handling of a wide range of applications and thereby enables centralized logging and control of retrieval processes. This concept of an open, component-based system was realized as a plug-in for the Java-based open-source platform Eclipse. Markus Nick and Klaus-Dieter Althoff explain in their contribution, incidentally the only English-language text in the book, the DILLEBIS approach to the maintenance of experience-based information systems. They call this approach a Maintainable Experience-based Information System and argue for designing experience-based systems according to this model. Gesine Quint and Steffen Weichert, by contrast, present the user-centered development of the product retrieval system EIKON, realized in cooperation with Blaupunkt GmbH. In an iterative design cycle, group-specific interaction options for a car-multimedia-accessories system were developed. In the second chapter, several authors engage more specifically with the application area 'digital libraries'. Claus-Peter Klas, Sascha Kriewel, Andre Schaefer, and Gudrun Fischer of the University of Duisburg-Essen present the system DAFFODIL, which supports strategic literature searches in digital libraries through a variety of tools. In addition, the logging of all events allows the system to be used as an evaluation platform.
Matthias Meiert's paper explains the implementation of electronic publication processes at universities, using as an example final theses of the International Information Management program at the University of Hildesheim. Alongside the framework conditions, both the current state and the target state of scholarly electronic publishing are presented in the form of group-specific recommendations. Daniel Harbig and Rene Schneider describe in their paper two approaches to the machine learning of ontologies, applied to the virtual library shelf MyShelf. After evaluating these two approaches, the authors argue for a semi-automated procedure for constructing ontologies.
  3. Heinz, M.; Voigt, H.: Aufbau einer Suchmaschine für ein Forschungsinstitut : Aufgabe für die Bibliothek? (2000) 0.01
  4. Grummann, M.: Sind Verfahren zur maschinellen Indexierung für Literaturbestände Öffentlicher Bibliotheken geeignet? : Retrievaltests von indexierten ekz-Daten mit der Software IDX (2000) 0.01
    Source
    Bibliothek: Forschung und Praxis. 24(2000) H.3, S.297-318
  5. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.01
    Date
    19. 3.1996 11:22:12
    Source
    Journal of documentation. 29(1973) no.3, S.251-257
  6. MacCall, S.L.; Cleveland, A.D.: ¬A relevance-based quantitative measure for Internet information retrieval evaluation (1999) 0.01
    Abstract
    An important indicator of a maturing Internet is the development of metrics for its evaluation as a practical tool for end-user information retrieval. However, the Internet presents specific problems for traditional IR measures, such as the need to deal with a variety of classes of retrieval tools. This paper presents a metric for comparing the performance of common classes of Internet information retrieval tools, including human-indexed catalogs of web resources and automatically indexed databases of web pages. The metric uses a relevance-based quantitative measure to compare the performance of end-users using these Internet information retrieval tools. The benefit of the proposed metric is that it is relevance-based (using end-user relevance judgments) and that it facilitates the comparison of the performance of different classes of IIR tools.
  7. Mettrop, W.; Nieuwenhuysen, P.: Internet search engines : fluctuations in document accessibility (2001) 0.01
    Abstract
    An empirical investigation of the consistency of retrieval through Internet search engines is reported. Thirteen engines are evaluated: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler, and three national Dutch engines: Ilse, Search.nl, and Vindex. The focus is on a characteristic related to size: the degree of consistency with which an engine retrieves documents. Does an engine always present the same relevant documents that are, or were, available in its databases? We observed and identified three types of fluctuations in the result sets of several kinds of searches, many of them significant. These should be taken into account by users who apply an Internet search engine, for instance to retrieve as many relevant documents as possible, to retrieve a document that was already found in a previous search, or to perform scientometric/bibliometric measurements. The fluctuations should also be considered as a complication in other research on the behaviour and performance of Internet search engines. In conclusion: in view of the increasing importance of the Internet as a publication/communication medium, the fluctuations in the result sets of Internet search engines can no longer be neglected.
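The consistency question this abstract raises can be made concrete in a simple way (an illustration of the general idea, not the authors' protocol): run the same query twice and compare the returned URL sets, e.g. by Jaccard overlap:

```python
def jaccard(results_a, results_b):
    """Overlap between two result sets, e.g. URLs returned by the same
    query submitted to the same engine at two different times."""
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 1.0  # two empty result sets are trivially identical
    return len(a & b) / len(a | b)

run1 = ["u1", "u2", "u3", "u4"]  # hypothetical URLs from the first run
run2 = ["u2", "u3", "u4", "u5"]  # same query a day later
print(jaccard(run1, run2))  # 3 shared of 5 distinct -> 0.6
```

A value below 1.0 on an unchanged corpus signals exactly the kind of fluctuation the study documents.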
  8. Harter, S.P.; Hert, C.A.: Evaluation of information retrieval systems : approaches, issues, and methods (1997) 0.01
    Abstract
    State-of-the-art review of information retrieval systems, defined as systems retrieving documents as opposed to numerical data. Explains the classic Cranfield studies that have served as a standard for retrieval testing since the 1960s and discusses the Cranfield model and its relevance-based measures of retrieval effectiveness. Details some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness, and basic concepts. Discusses the evaluation of Internet search engines in light of the Cranfield model, noting the very real differences between batch systems (Cranfield) and interactive systems (the Internet). Because the Internet collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Considers future directions in evaluating information retrieval systems.
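The recall problem named in this abstract is easy to see once the Cranfield-style set measures are written out (a minimal sketch with invented document IDs): recall divides by the complete set of relevant documents, which is exactly what cannot be enumerated for the open web.

```python
def precision_recall(retrieved, relevant):
    """Cranfield-style set-based effectiveness measures.
    `relevant` must be the COMPLETE set of relevant documents --
    known for a fixed test collection, unknowable for the open web."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall({"d1", "d2", "d3", "d4"}, {"d2", "d4", "d7"})
print(p, r)  # precision 0.5, recall 2/3
```

Precision only needs the retrieved set and its judgments, which is why web-engine evaluations typically report precision at a cutoff instead of recall.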
  9. Mandl, T.: Web- und Multimedia-Dokumente : Neuere Entwicklungen bei der Evaluierung von Information Retrieval Systemen (2003) 0.01
    Abstract
    The amount of data on the Internet continues to grow rapidly, and with it the need for high-quality information retrieval services for orientation and problem-oriented search. Deciding whether to use or procure information retrieval software requires meaningful evaluation results. This contribution presents recent developments in the evaluation of information retrieval systems and identifies a trend toward the specialization and diversification of evaluation studies, which increases the realism of their results. The focus is on the retrieval of specialist texts, Internet pages, and multimedia objects.
  10. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.01
    Abstract
    Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results.
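The powers-of-2 idea can be sketched as follows (an illustration of the general technique, not Davis's exact procedure; query and document are invented): give the i-th query term the weight 2^i and score a document by summing the weights of the terms it contains. The score's binary digits then reveal exactly which terms matched, so no two different match patterns can tie.

```python
def score_document(query_terms, doc_text):
    """Weighted term search with powers of 2: the i-th query term
    contributes 2**i, so each match pattern yields a unique score."""
    doc_words = set(doc_text.lower().split())
    score = 0
    for i, term in enumerate(query_terms):
        if term.lower() in doc_words:
            score += 2 ** i
    return score

query = ["retrieval", "evaluation", "internet"]
s = score_document(query, "An evaluation of Internet search engines")
print(s, bin(s))  # 6 0b110 -> terms 1 and 2 matched, term 0 did not
```

Sorting documents by this score ranks them by match pattern, with the highest-weighted (typically most important) terms dominating the order.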
    Theme
    Internet
  11. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.01
    Date
    12. 7.2011 18:29:22
  12. Kaizik, A.; Gödert, W.; Oßwald, A.: Evaluation von Subject Gateways des Internet (EJECT) : Projektbericht (2001) 0.01
    Abstract
    The scope and heterogeneity of the available information require increasingly differentiated methods and tools for finding information sources in the context of a particular subject area or scientific discipline in a targeted way and with as little noise as possible. To this end, a number of so-called subject gateways have recently been developed. So far, few studies of the quality of such tools exist, nor has a differentiated methodology for such assessments been developed. The project Evaluation of Subject Gateways of the Internet (EJECT) therefore pursued the following aims: to document, through an analysis of existing subject gateways, the varied usage of the term and to contribute to a more precise definition; to outline a methodical approach to the qualitative evaluation of subject gateways; and to test this approach through an evaluation of the subject gateway EULER, which was developed within an EU project for the field of mathematics. The results of the evaluation are presented in detail in this study, and the extent to which they can be transferred to the evaluation of other gateways is discussed.
  13. Oppenheim, C.; Morris, A.; McKnight, C.: ¬The evaluation of WWW search engines (2000) 0.01
    Abstract
    The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way they have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, so the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively and variations in the performance of any given search engine over time can be tracked. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations about the types of tools needed.
  14. Gilchrist, A.: Research and consultancy (1998) 0.00
    Abstract
    State-of-the-art review of the literature published on research and consultancy in library and information science (LIS). Issues covered include: the scope and definitions of what constitutes research and consultancy; funding of research and development; national LIS research and the funding agencies; electronic libraries; document delivery; multimedia document delivery; the Z39.50 standard for client-server computer architecture; the Internet and WWW; electronic publishing; information retrieval; evaluation and evaluation techniques; the Text Retrieval Conferences (TREC); the user domain; management issues; decision support systems; information politics and organizational culture; and value-for-money issues.
  15. Hofstede, M.: Literatuur over onderwerpen zoeken in de OPC (1994) 0.00
    
    Source
    CRI bulletin. 29(1994), Sept., S.14-15
  16. Wolff, C.: Leistungsvergleich der Retrievaloberflächen zwischen Web und klassischen Expertensystemen (2001) 0.00
    Theme
    Internet
  17. Munoz, A.M.; Munoz, F.A.: Nuevas areas de conocimiento y la problematica documental : la prospectiva de la paz en la Universidad de Granada (1997) 0.00
    Abstract
    Report of a study, from the user's point of view, investigating the facility with which bibliographical material in a multidisciplinary field, peace prospective studies, can be identified from the University's resources. Searches (uniterm and relational) were carried out using all available tools - OPACs, CD-ROM collections, online databases, manual catalogues, the Internet - both on the University's system and at national research institutions. Overall results returned a low rate of pertinence (1.86%). This is due not to a lack of user search expertise but to the lack of subject-specific indexing, coupled with the use of a MARC format.
  18. Vechtomova, O.: Facet-based opinion retrieval from blogs (2010) 0.00
    Theme
    Internet
  19. Hancock-Beaulieu, M.; McKenzie, L.; Irving, A.: Evaluative protocols for searching behaviour in online library catalogues (1991) 0.00
    Date
    23. 1.1999 19:52:29
  20. Harman, D.K.: ¬The TREC test collections (2005) 0.00
    Date
    29. 3.1996 18:16:49

Languages

  • e 86
  • d 11
  • f 1
  • fi 1
  • m 1
  • nl 1
  • sp 1

Types

  • a 94
  • s 6
  • m 4
  • r 2
  • el 1