Search (38 results, page 1 of 2)

  • × theme_ss:"Suchtaktik"
  • × year_i:[2010 TO 2020}
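The second facet, `year_i:[2010 TO 2020}`, uses Lucene/Solr range syntax with mixed brackets: `[` makes the lower bound inclusive, `}` makes the upper bound exclusive, i.e. 2010 <= year_i < 2020. A minimal sketch of how such a filtered request could be assembled with the Python standard library (the host, port, and core name are placeholders, not the real endpoint of this catalog):

```python
from urllib.parse import urlencode

# Reproduce the two active filters shown above as a Solr-style request.
params = urlencode({
    "q": 'theme_ss:"Suchtaktik"',
    # Mixed brackets: [ = inclusive lower bound, } = exclusive upper bound,
    # so this matches 2010 <= year_i < 2020.
    "fq": "year_i:[2010 TO 2020}",
    "rows": 20,
})
# Hypothetical endpoint for illustration only.
url = "http://localhost:8983/solr/catalog/select?" + params
print(url)
```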
  1. Waschatz, B.: Schmökern ist schwierig : Viele Uni-Bibliotheken ordnen ihre Bücher nicht - Tipps für eine erfolgreiche Suche (2010) 0.03
    0.03186889 = product of:
      0.06373778 = sum of:
        0.03650633 = weight(_text_:im in 3206) [ClassicSimilarity], result of:
          0.03650633 = score(doc=3206,freq=16.0), product of:
            0.1377539 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.048731633 = queryNorm
            0.26501122 = fieldWeight in 3206, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3206)
        0.027231453 = product of:
          0.04084718 = sum of:
            0.02103979 = weight(_text_:online in 3206) [ClassicSimilarity], result of:
              0.02103979 = score(doc=3206,freq=4.0), product of:
                0.1478957 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.048731633 = queryNorm
                0.142261 = fieldWeight in 3206, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3206)
            0.019807389 = weight(_text_:22 in 3206) [ClassicSimilarity], result of:
              0.019807389 = score(doc=3206,freq=2.0), product of:
                0.17064987 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048731633 = queryNorm
                0.116070345 = fieldWeight in 3206, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3206)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
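Each leaf of the explanation trees in this listing follows Lucene's ClassicSimilarity (TF-IDF): tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the term score is queryWeight * fieldWeight. A minimal sketch reproducing the `weight(_text_:im in 3206)` leaf above (small deviations in the last digits are expected, since Lucene computes in single precision):

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One leaf of a Lucene ClassicSimilarity explanation tree."""
    tf = math.sqrt(freq)                               # 4.0 for freq=16
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # ~2.8267863 here
    query_weight = idf * query_norm                    # ~0.1377539
    field_weight = tf * idf * field_norm               # ~0.26501122
    return query_weight * field_weight

# Values taken from the explanation of weight(_text_:im in 3206) above:
score = term_score(freq=16.0, doc_freq=7115, max_docs=44218,
                   query_norm=0.048731633, field_norm=0.0234375)
print(round(score, 8))  # close to 0.03650633
```

The surrounding `coord(m/n)` factors then scale the summed term scores by the fraction of query clauses a document matched, which is why a result hitting only two of four clauses is multiplied by 0.5.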
    
    Content
     "In a public library, finding a particular work is fairly easy: you simply walk along the shelves until you reach the right letter or subject. In many academic libraries things are more complicated, because there students have to dig through databases and card catalogues. "One exception is the reading room," explains Marlene Grau, spokeswoman of the Staats- und Universitätsbibliothek in Hamburg. In the reading room the books stand in neat rows, sorted by discipline such as law, biology or medicine, just as in a public library. So students can browse a little and read across subjects. Anyone looking for a specific work, however, is better off going straight to the library's catalogue. There you can search by author or by a title keyword - in biology, say, "Fliege" (fly) or "Insekt" (insect). "Then you can hope that books on the topic contain that keyword in the title," says Grau. The other approach is to search by subject heading. To find a suitable one, you can browse the subject-heading index, or look up a known book on the topic and continue the search with its subject headings. The advantage: a book does not have to contain the subject heading in its title. "Titles such as 'Keine Angst vor Zahlen' or 'Grundkurs Rechnen' can be found via the subject headings 'Mathematik' and 'Einführung', but hardly via title keywords," explains Ulrich Hohoff, head of the University Library in Augsburg.
     In the online catalogue students can also see whether a book is available or on loan. If it is currently checked out, you can place a hold on it, explains Monika Ziller, chairwoman of the Deutscher Bibliotheksverband in Berlin. Students are then notified as soon as it has been returned. In addition, students can use virtual subject libraries, Grau explains. Slavic studies, for example, is looked after by the Staatsbibliothek in Berlin. On its website, search terms can be used to comb through all electronic Slavic-studies resources such as journals, e-books or bibliographies. The virtual subject library then produces a list of titles. In the best case, students can access the full texts on that list directly; otherwise they have to check whether their own library holds the work. Journals in particular are often available online in full text, but so are encyclopedias. "They are also more up to date than the 1990 Brockhaus sitting on the shelf at home," says Grau. For copyright reasons, however, some texts can only be read on computers on the university campus, Hohoff adds. If a book cannot be found, the reason is often an error that crept into the search. That starts with spelling: "Library catalogues do not have a fault-tolerant search like Google," Ziller explains.
     "Another common mistake is to search for books on Google," says Grau. The search engine does not contain library data. Students should also note whether they are looking for a journal article or a monograph. If you need an article, you have to search for the title of the journal, not the title of the article. It is also important to pick the right search key. If you look for the author Johann Wolfgang von Goethe but type the name into the title search, you get a different set of hits. Students should also not narrow the search too far. "Then you find nothing," warns Grau. On the other hand, you must not search too broadly either. Anyone looking for a book on German history gets thousands of hits after entering "deutsche Geschichte". "You have to choose the right search key," Grau explains. Entering "deutsche Geschichte" in the "title begins with" field, for example, finds all titles containing these words in exactly this order - so you do not end up at the book "Deutsche Naturlyrik: ihre Geschichte in Einzelanalysen". With broadly defined terms this is very important and helpful."
    Date
    3. 5.1997 8:44:22
  2. Carstens, C.: Ontology based query expansion : retrieval support for the domain of educational research (2012) 0.02
    0.020805728 = product of:
      0.041611455 = sum of:
        0.030116186 = weight(_text_:im in 4655) [ClassicSimilarity], result of:
          0.030116186 = score(doc=4655,freq=2.0), product of:
            0.1377539 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.048731633 = queryNorm
            0.2186231 = fieldWeight in 4655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4655)
        0.01149527 = product of:
          0.03448581 = sum of:
            0.03448581 = weight(_text_:retrieval in 4655) [ClassicSimilarity], result of:
              0.03448581 = score(doc=4655,freq=2.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23394634 = fieldWeight in 4655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4655)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
     This thesis investigates how a research-context ontology can be exploited as a source for generating query expansion terms in a retrieval system for the domain of educational research. By combining traditional large-scale automatic retrieval experiments with user-centred interactive retrieval experiments, it draws a comprehensive picture of the effects of ontology-based query expansion. While the automatic experiments examine in detail the expansion effects of the individual types of ontology-based expansion terms, the interactive experiments shed light on how ontology-based expansion mechanisms influence users' search behaviour and search success.
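The core mechanism this abstract describes can be sketched in a few lines: look the query term up in an ontology and OR its related labels into the query. The mini "ontology" below, its relation types, and the Lucene-style query syntax are all invented for illustration; the thesis uses a real research-context ontology and its own term-selection strategy.

```python
# Toy ontology: term -> {relation type -> related labels}. Invented example
# data from the educational-research domain, for illustration only.
ONTOLOGY = {
    "reading literacy": {
        "synonym": ["reading competence"],
        "related": ["reading comprehension", "PISA"],
    },
}

def expand_query(query, relations=("synonym", "related")):
    """OR the original term with its ontology expansions (Lucene-style)."""
    expansions = []
    for relation in relations:
        expansions.extend(ONTOLOGY.get(query, {}).get(relation, []))
    terms = [query] + expansions
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("reading literacy"))
```

Restricting `relations` to a single type mirrors the automatic experiments, which measure the expansion effect of each type of ontology relation separately.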
  3. Sanfilippo, M.; Yang, S.; Fichman, P.: Trolling here, there, and everywhere : perceptions of trolling behaviors in context (2017) 0.01
    0.011939808 = product of:
      0.04775923 = sum of:
        0.04775923 = product of:
          0.071638845 = sum of:
            0.04207958 = weight(_text_:online in 3823) [ClassicSimilarity], result of:
              0.04207958 = score(doc=3823,freq=4.0), product of:
                0.1478957 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.048731633 = queryNorm
                0.284522 = fieldWeight in 3823, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3823)
            0.029559264 = weight(_text_:retrieval in 3823) [ClassicSimilarity], result of:
              0.029559264 = score(doc=3823,freq=2.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.20052543 = fieldWeight in 3823, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3823)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
     Online trolling has become increasingly prevalent and visible in online communities. Perceptions of and reactions to trolling behaviors vary significantly from one community to another, as trolling behaviors are contextual and differ across platforms and communities. Through an examination of seven trolling scenarios, this article intends to answer the following questions: how do trolling behaviors differ across contexts; how do perceptions of trolling differ from case to case; and which aspects of the context of trolling are perceived to be important by the public? Based on focus group and interview data, we discuss the ways in which community norms and demographics, technological features of platforms, and community boundaries are perceived to impact trolling behaviors. Two major contributions of the study include a codebook to support future analysis of trolling and a formal concept analysis of contextual perceptions of trolling.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  4. Saastamoinen, M.; Järvelin, K.: Search task features in work tasks of varying types and complexity (2017) 0.01
    0.011529008 = product of:
      0.04611603 = sum of:
        0.04611603 = product of:
          0.069174044 = sum of:
            0.029559264 = weight(_text_:retrieval in 3589) [ClassicSimilarity], result of:
              0.029559264 = score(doc=3589,freq=2.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.20052543 = fieldWeight in 3589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3589)
            0.039614778 = weight(_text_:22 in 3589) [ClassicSimilarity], result of:
              0.039614778 = score(doc=3589,freq=2.0), product of:
                0.17064987 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23214069 = fieldWeight in 3589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3589)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
     Information searching in practice is seldom an end in itself. At work, work task (WT) performance forms the context that information searching should serve. Information retrieval (IR) systems development and evaluation should therefore take the WT context into account. The present paper analyzes how WT features (task complexity and task type) affect information searching in authentic work: the types of information needs, search processes, and search media. We collected data on 22 information professionals in authentic work situations in three organization types: city administration, universities, and companies. The data comprise 286 WTs and 420 search tasks (STs) and include transaction logs, video recordings, daily questionnaires, interviews, and observation. The data were analyzed quantitatively. Although the participants used a range of search media, most STs were simple throughout the data, and up to 42% of WTs did not include searching at all. WT effects on STs are not straightforward: different WT types react differently to WT complexity. Given the simplicity of authentic searching, the WT/ST types used in interactive IR experiments should be reconsidered.
  5. Rieh, S.Y.; Kim, Y.-M.; Markey, K.: Amount of invested mental effort (AIME) in online searching (2012) 0.01
    0.011263336 = product of:
      0.045053344 = sum of:
        0.045053344 = product of:
          0.067580014 = sum of:
            0.042947292 = weight(_text_:online in 2726) [ClassicSimilarity], result of:
              0.042947292 = score(doc=2726,freq=6.0), product of:
                0.1478957 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.048731633 = queryNorm
                0.29038906 = fieldWeight in 2726, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2726)
            0.02463272 = weight(_text_:retrieval in 2726) [ClassicSimilarity], result of:
              0.02463272 = score(doc=2726,freq=2.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.16710453 = fieldWeight in 2726, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2726)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    This research investigates how people's perceptions of information retrieval (IR) systems, their perceptions of search tasks, and their perceptions of self-efficacy influence the amount of invested mental effort (AIME) they put into using two different IR systems: a Web search engine and a library system. It also explores the impact of mental effort on an end user's search experience. To assess AIME in online searching, two experiments were conducted using these methods: Experiment 1 relied on self-reports and Experiment 2 employed the dual-task technique. In both experiments, data were collected through search transaction logs, a pre-search background questionnaire, a post-search questionnaire and an interview. Important findings are these: (1) subjects invested greater mental effort searching a library system than searching the Web; (2) subjects put little effort into Web searching because of their high sense of self-efficacy in their searching ability and their perception of the easiness of the Web; (3) subjects did not recognize that putting mental effort into searching was something needed to improve the search results; and (4) data collected from multiple sources proved to be effective for assessing mental effort in online searching.
  6. Hopkins, M.E.; Zavalina, O.L.: Evaluating physicians' serendipitous knowledge discovery in online discovery systems : a new approach (2019) 0.01
    0.009634658 = product of:
      0.03853863 = sum of:
        0.03853863 = product of:
          0.057807945 = sum of:
            0.024795627 = weight(_text_:online in 5842) [ClassicSimilarity], result of:
              0.024795627 = score(doc=5842,freq=2.0), product of:
                0.1478957 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.048731633 = queryNorm
                0.16765618 = fieldWeight in 5842, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5842)
            0.033012316 = weight(_text_:22 in 5842) [ClassicSimilarity], result of:
              0.033012316 = score(doc=5842,freq=2.0), product of:
                0.17064987 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048731633 = queryNorm
                0.19345059 = fieldWeight in 5842, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5842)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Date
    20. 1.2015 18:30:22
  7. Mayr, P.; Mutschke, P.; Petras, V.; Schaer, P.; Sure, Y.: Applying science models for search (2010) 0.01
    0.0056886836 = product of:
      0.022754734 = sum of:
        0.022754734 = product of:
          0.0682642 = sum of:
            0.0682642 = weight(_text_:retrieval in 4663) [ClassicSimilarity], result of:
              0.0682642 = score(doc=4663,freq=6.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.46309367 = fieldWeight in 4663, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4663)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     The paper proposes three different kinds of science models as value-added services that are integrated into the retrieval process to enhance retrieval quality. The paper discusses the approaches Search Term Recommendation, Bradfordizing and Author Centrality on a general level and addresses implementation issues of the models within a real-life retrieval environment.
  8. Yuan, X.; Belkin, N.J.: Evaluating an integrated system supporting multiple information-seeking strategies (2010) 0.00
    0.004926544 = product of:
      0.019706177 = sum of:
        0.019706177 = product of:
          0.059118528 = sum of:
            0.059118528 = weight(_text_:retrieval in 3992) [ClassicSimilarity], result of:
              0.059118528 = score(doc=3992,freq=8.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.40105087 = fieldWeight in 3992, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3992)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     Many studies have demonstrated that people engage in a variety of different information behaviors when seeking information. However, standard information retrieval systems such as Web search engines continue to be designed to support mainly one such behavior, specified searching. This situation has led to suggestions that people would be better served by information retrieval systems which support different kinds of information-seeking strategies. This article reports on an experiment comparing the retrieval effectiveness of an integrated interactive information retrieval (IIR) system, which adapts to support different information-seeking strategies, with that of a standard baseline IIR system. The experiment, with 32 participants each searching on eight different topics, indicates that the integrated IIR system yielded significantly better user satisfaction with search results, significantly more effective interaction, and significantly better usability than the baseline system.
  9. Habernal, I.; Konopík, M.; Rohlík, O.: Question answering (2012) 0.00
    0.0042665126 = product of:
      0.01706605 = sum of:
        0.01706605 = product of:
          0.05119815 = sum of:
            0.05119815 = weight(_text_:retrieval in 101) [ClassicSimilarity], result of:
              0.05119815 = score(doc=101,freq=6.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.34732026 = fieldWeight in 101, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=101)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     Question answering is an area of information retrieval with the added challenge of applying sophisticated techniques to identify the complex syntactic and semantic relationships present in text, in order to provide a more satisfactory response to the user's information needs. For this reason, the authors see question answering as the next step beyond standard information retrieval. This chapter covers state-of-the-art question answering, focusing on an overview of the systems, techniques and approaches that are likely to be employed in the next generations of search engines. Special attention is paid to question answering that uses the World Wide Web as its data source and to question answering that exploits the possibilities of the Semantic Web. Considerations about current issues and prospects for promising future research are also provided.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
  10. Bergman, O.; Whittaker, S.; Sanderson, M.; Nachmias, R.; Ramamoorthy, A.: ¬The effect of folder structure on personal file navigation (2010) 0.00
    0.0041054534 = product of:
      0.016421814 = sum of:
        0.016421814 = product of:
          0.04926544 = sum of:
            0.04926544 = weight(_text_:retrieval in 4114) [ClassicSimilarity], result of:
              0.04926544 = score(doc=4114,freq=8.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.33420905 = fieldWeight in 4114, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4114)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Folder navigation is the main way that personal computer users retrieve their own files. People dedicate considerable time to creating systematic structures to facilitate such retrieval. Despite the prevalence of both manual organization and navigation, there is very little systematic data about how people actually carry out navigation, or about the relation between organization structure and retrieval parameters. The aims of our research were therefore to study users' folder structure, personal file navigation, and the relations between them. We asked 296 participants to retrieve 1,131 of their active files and analyzed each of the 5,035 navigation steps in these retrievals. Folder structures were found to be shallow (files were retrieved from mean depth of 2.86 folders), with small folders (a mean of 11.82 files per folder) containing many subfolders (M=10.64). Navigation was largely successful and efficient with participants successfully accessing 94% of their files and taking 14.76 seconds to do this on average. Retrieval time and success depended on folder size and depth. We therefore found the users' decision to avoid both deep structure and large folders to be adaptive. Finally, we used a predictive model to formulate the effect of folder depth and folder size on retrieval time, and suggested an optimization point in this trade-off.
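The depth-versus-size trade-off this abstract describes can be illustrated with a hypothetical cost model: storing N files in a uniform tree with s entries per folder requires about log_s(N) navigation steps, each costing a fixed step time plus time to scan one folder of s entries. The abstract does not give the paper's fitted model; the functional form and the time constants below are invented purely to show why an intermediate folder size is optimal.

```python
import math

def retrieval_time(folder_size, n_files, t_step=1.0, t_scan=0.1):
    """Hypothetical cost model: reaching one of n_files through a uniform
    tree with folder_size entries per folder takes depth navigation steps,
    each paying a fixed cost plus a per-entry scanning cost."""
    depth = math.log(n_files) / math.log(folder_size)  # log_s(N)
    return depth * (t_step + t_scan * folder_size)

# Sweep folder sizes: very small folders force deep trees, very large
# folders are slow to scan, so an intermediate size minimizes total time.
best = min(range(2, 51), key=lambda s: retrieval_time(s, 1000))
print(best, round(retrieval_time(best, 1000), 2))
```

Under these invented constants the optimum lands at a moderately small folder size, qualitatively matching the study's finding that users' preference for shallow trees of small folders is adaptive.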
  11. Looking for information : a survey on research on information seeking, needs, and behavior (2012) 0.00
    0.0041054534 = product of:
      0.016421814 = sum of:
        0.016421814 = product of:
          0.04926544 = sum of:
            0.04926544 = weight(_text_:retrieval in 3802) [ClassicSimilarity], result of:
              0.04926544 = score(doc=3802,freq=2.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.33420905 = fieldWeight in 3802, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3802)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  12. Tamine, L.; Chouquet, C.: On the impact of domain expertise on query formulation, relevance assessment and retrieval performance in clinical settings (2017) 0.00
    0.003555427 = product of:
      0.014221708 = sum of:
        0.014221708 = product of:
          0.042665124 = sum of:
            0.042665124 = weight(_text_:retrieval in 3290) [ClassicSimilarity], result of:
              0.042665124 = score(doc=3290,freq=6.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.28943354 = fieldWeight in 3290, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3290)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     The large volumes of medical information available on the web may provide answers for a wide range of users attempting to solve health-related problems. While experts generally utilize reliable resources for diagnosis search and professional development, novices utilize various (social) web resources to obtain information that helps them manage their own health or the health of people they care for. Related search topics address clinical diagnosis, advice seeking, information sharing, connecting with experts, etc. This paper focuses on the extent to which expertise impacts clinical query formulation, document relevance assessment and retrieval performance, in the context of tailoring retrieval models and systems to experts vs. non-experts. The results show that medical domain expertise 1) plays an important role in the lexical representations of information needs; 2) significantly influences the perception of relevance, even among users with similar levels of expertise; and 3) reinforces the idea that a single ground truth does not exist, leading to variability of system rankings with respect to the user's level of expertise. The findings of this study present opportunities for the design of personalized health-related IR systems, and also provide insights into the evaluation of such systems.
  13. Looking for information : a survey on research on information seeking, needs, and behavior (2016) 0.00
    0.003555427 = product of:
      0.014221708 = sum of:
        0.014221708 = product of:
          0.042665124 = sum of:
            0.042665124 = weight(_text_:retrieval in 3803) [ClassicSimilarity], result of:
              0.042665124 = score(doc=3803,freq=6.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.28943354 = fieldWeight in 3803, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3803)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    RSWK
    Information Retrieval
    Subject
    Information Retrieval
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  14. Hoeber, O.: Human-centred Web search (2012) 0.00
    0.0034835925 = product of:
      0.01393437 = sum of:
        0.01393437 = product of:
          0.04180311 = sum of:
            0.04180311 = weight(_text_:retrieval in 102) [ClassicSimilarity], result of:
              0.04180311 = score(doc=102,freq=4.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.2835858 = fieldWeight in 102, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=102)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     People commonly experience difficulties when searching the Web, arising from incomplete knowledge of their information needs, an inability to formulate accurate queries, and a low tolerance for considering the relevance of the search results. While simple, easy-to-use interfaces have made Web search universally accessible, they provide little assistance in overcoming these difficulties when information needs are more complex than simple fact-verification. In human-centred Web search, the purpose of the search engine expands from a simple information retrieval engine to a decision support system. People are empowered to take an active role in the search process, with the search engine supporting them in developing a deeper understanding of their information needs, assisting them in crafting and refining their queries, and aiding them in evaluating and exploring the search results. In this chapter, recent research in this domain is outlined and discussed.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
  15. Rowley, J.; Johnson, F.; Sbaffi, L.: Gender as an influencer of online health information-seeking and evaluation behavior (2017) 0.00
    0.0029221931 = product of:
      0.011688773 = sum of:
        0.011688773 = product of:
          0.035066318 = sum of:
            0.035066318 = weight(_text_:online in 3316) [ClassicSimilarity], result of:
              0.035066318 = score(doc=3316,freq=4.0), product of:
                0.1478957 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23710167 = fieldWeight in 3316, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3316)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article contributes to the growing body of research that explores the significance of context in health information behavior. Specifically, through the lens of trust judgments, it demonstrates that gender is a determinant of the information evaluation process. A questionnaire-based survey collected data from adults regarding the factors that influence their judgment of the trustworthiness of online health information. Both men and women identified credibility, recommendation, ease of use, and brand as being of importance in their trust judgments. However, women also take style into account, while men eschew this for familiarity. In addition, men appear to be more concerned with the comprehensiveness and accuracy of the information, the ease with which they can access it, and its familiarity, whereas women demonstrate greater interest in cognition, such as the ease with which they can read and understand the information. These gender differences are consistent with the demographic data, which suggest that women consult more types of sources than men; that men are more likely to be searching with respect to a long-standing health complaint; and that women are more likely than men to use tablets in their health information seeking. Recommendations for further research to better inform practice are offered.
  16. Yuan, X.; Belkin, N.J.: Investigating information retrieval support techniques for different information-seeking strategies (2010) 0.00
    0.0029029937 = product of:
      0.011611975 = sum of:
        0.011611975 = product of:
          0.034835923 = sum of:
            0.034835923 = weight(_text_:retrieval in 3699) [ClassicSimilarity], result of:
              0.034835923 = score(doc=3699,freq=4.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23632148 = fieldWeight in 3699, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3699)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    We report on a study that investigated the efficacy of four different interactive information retrieval (IIR) systems, each designed to support a specific information-seeking strategy (ISS). These systems were constructed using different combinations of IR techniques (i.e., combinations of different methods of representation, comparison, presentation and navigation), each of which was hypothesized to be well suited to support a specific ISS. We compared the performance of searchers in each such system, designated experimental, to an appropriate baseline system, which implemented the standard specified query and results list model of current state-of-the-art experimental and operational IR systems. Four within-subjects experiments were conducted for the purpose of this comparison. Results showed that each of the experimental systems was superior to its baseline system in supporting user performance for the specific ISS (that is, the information problem leading to that ISS) for which the system was designed. These results indicate that an IIR system, which intends to support more than one kind of ISS, should be designed within a framework which allows the use and combination of different IR support techniques for different ISSs.
  17. Cole, C.: ¬A theory of information need for information retrieval that connects information to knowledge (2011) 0.00
    0.0029029937 = product of:
      0.011611975 = sum of:
        0.011611975 = product of:
          0.034835923 = sum of:
            0.034835923 = weight(_text_:retrieval in 4474) [ClassicSimilarity], result of:
              0.034835923 = score(doc=4474,freq=4.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23632148 = fieldWeight in 4474, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4474)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article proposes a theory of information need for information retrieval (IR). Information need traditionally denotes the start state for someone seeking information, which includes information search using an IR system. There are two perspectives on information need. The dominant, computer science perspective is that the user needs to find an answer to a well-defined question which is easy for the user to formulate into a query to the system. Ironically, information science's best known model of information need (Taylor, 1968) deems it to be a "black box": unknowable and nonspecifiable by the user in a query to the information system. Information science has instead devoted itself to studying eight adjacent or surrogate concepts (information seeking, search and use; problem, problematic situation and task; sense making and evolutionary adaptation/information foraging). Based on an analysis of these eight adjacent/surrogate concepts, we create six testable propositions for a theory of information need. The central assumption of the theory is that while computer science sees IR as an information- or answer-finding system, focused on the user finding an answer, an information science or user-oriented theory of information need envisages a knowledge formulation/acquisition system.
  18. Kaptein, R.; Kamps, J.: Explicit extraction of topical context (2011) 0.00
    0.0029029937 = product of:
      0.011611975 = sum of:
        0.011611975 = product of:
          0.034835923 = sum of:
            0.034835923 = weight(_text_:retrieval in 4630) [ClassicSimilarity], result of:
              0.034835923 = score(doc=4630,freq=4.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23632148 = fieldWeight in 4630, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4630)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article studies one of the main bottlenecks in providing more effective information access: the poverty on the query end. We explore whether users can classify keyword queries into categories from the DMOZ directory on different levels and whether this topical context can help retrieval performance. We have conducted a user study to let participants classify queries into DMOZ categories, either by freely searching the directory or by selection from a list of suggestions. Results of the study show that DMOZ categories are suitable for topic categorization. Both free search and list selection can be used to elicit topical context. Free search leads to more specific categories than the list selections. Participants in our study show moderate agreement on the categories they select, but broad agreement on the higher levels of chosen categories. The free search categories significantly improve retrieval effectiveness. The more general list selection categories and the top-level categories do not lead to significant improvements. Combining topical context with blind relevance feedback leads to better results than applying either of them separately. We conclude that DMOZ is a suitable resource for interacting with users on topical categories applicable to their query, and can lead to better search results.
  19. Vakkari, P.; Huuskonen, S.: Search effort degrades search output but improves task outcome (2012) 0.00
    0.0029029937 = product of:
      0.011611975 = sum of:
        0.011611975 = product of:
          0.034835923 = sum of:
            0.034835923 = weight(_text_:retrieval in 46) [ClassicSimilarity], result of:
              0.034835923 = score(doc=46,freq=4.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23632148 = fieldWeight in 46, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=46)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    We analyzed how effort in searching is associated with search output and task outcome. In a field study, we examined how students' search effort for an assigned learning task was associated with precision and relative recall, and how this was associated with the quality of the learning outcome. The study subjects were 41 medical students writing essays for a class in medicine. Searching in Medline was part of their assignment. The data comprised students' search logs in Medline, their assessment of the usefulness of references retrieved, a questionnaire concerning the search process, and evaluation scores of the essays given by the teachers. Pearson correlations were calculated to answer the research questions. Finally, a path model for predicting task outcome was built. We found that effort in the search process degraded precision but improved task outcome. There were two major mechanisms reducing precision while enhancing task outcome. Effort in expanding Medical Subject Heading (MeSH) terms within search sessions, and effort in assessing and exploring documents in the result list between the sessions, degraded precision but led to better task outcome. Thus, human effort compensated for bad retrieval results on the way to a good task outcome. The findings suggest that traditional effectiveness measures in information retrieval should be complemented with evaluation measures for the search process and outcome.
  20. Barsky, E.; Bar-Ilan, J.: ¬The impact of task phrasing on the choice of search keywords and on the search process and success (2012) 0.00
    0.0029029937 = product of:
      0.011611975 = sum of:
        0.011611975 = product of:
          0.034835923 = sum of:
            0.034835923 = weight(_text_:retrieval in 455) [ClassicSimilarity], result of:
              0.034835923 = score(doc=455,freq=4.0), product of:
                0.14740905 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.048731633 = queryNorm
                0.23632148 = fieldWeight in 455, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=455)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This experiment studied the impact of various task phrasings on the search process. Eighty-eight searchers performed four web search tasks prescribed by the researchers. Each task was linked to an existing target web page containing a piece of text that served as the basis for the task. A matching phrasing was a task whose wording matched the text of the target page; a nonmatching phrasing was synonymous with the matching phrasing but had no match with the target page. Searchers received tasks of both types in English and in Hebrew. The search process was logged. The findings confirm that task phrasing shapes the search process and outcome, as well as user satisfaction. Each search stage (retrieval of the target page, visiting the target page, and finding the target answer) was associated with different phenomena; for example, target page retrieval was negatively affected by persistence in search patterns (e.g., use of phrases), user-originated keywords, shorter queries, and omitting key keywords from the queries. Searchers were easily driven away from the top-ranked target pages by lower-ranked pages with title tags matching the queries. Some searchers created consistently longer queries than other searchers, regardless of the task length. Several consistent behavior patterns characterized searching in Hebrew, including keyword modifications (replacing infinitive forms with nouns), omitting prefixes and articles, and a preference for common language. Searchers' self-assessment of success also depended on whether the wording of the answer matched the task phrasing.
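The per-entry score breakdowns in this listing are Lucene "explain" output tagged [ClassicSimilarity], i.e. the classic TF-IDF model. Assuming that similarity (an inference from the tags, not stated elsewhere on the page), the innermost weight for entry 15's term "online" in doc 3316 can be reproduced with a minimal sketch:

```python
import math

# Minimal sketch of Lucene ClassicSimilarity (TF-IDF) scoring, reproducing
# the explain breakdown for entry 15 (term "online", doc 3316).
# All constants are copied from the breakdown; formulas are the standard
# ClassicSimilarity definitions.

def tf(freq):
    # term-frequency factor: sqrt of the raw in-document frequency
    return math.sqrt(freq)

def idf(doc_freq, num_docs):
    # inverse document frequency: 1 + ln(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))

num_docs   = 44218
doc_freq   = 5778          # docFreq of "online"
query_norm = 0.048731633   # queryNorm, given in the dump
field_norm = 0.0390625     # per-field length norm for doc 3316

query_weight = idf(doc_freq, num_docs) * query_norm                # ~0.1478957
field_weight = tf(4.0) * idf(doc_freq, num_docs) * field_norm      # ~0.23710167
weight       = query_weight * field_weight                         # ~0.035066318

# The outer "coord" lines scale the term weight by the fraction of
# query clauses matched: coord(1/3) and coord(1/4) here.
score = weight * (1 / 3) * 0.25                                    # ~0.0029221931
```

The same arithmetic (tf × idf × norms, then coord scaling) accounts for every breakdown on this page; only freq, docFreq, and fieldNorm vary per entry.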

Languages

  • e 36
  • d 2

Types

  • a 35
  • m 3
  • s 1