Search (211 results, page 1 of 11)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Marx, E. et al.: Exploring term networks for semantic search over RDF knowledge graphs (2016) 0.08
    0.07674606 = product of:
      0.15349212 = sum of:
        0.014975886 = weight(_text_:information in 3279) [ClassicSimilarity], result of:
          0.014975886 = score(doc=3279,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.19395474 = fieldWeight in 3279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3279)
        0.10871997 = weight(_text_:networks in 3279) [ClassicSimilarity], result of:
          0.10871997 = score(doc=3279,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.52258724 = fieldWeight in 3279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.078125 = fieldNorm(doc=3279)
        0.029796265 = product of:
          0.05959253 = sum of:
            0.05959253 = weight(_text_:22 in 3279) [ClassicSimilarity], result of:
              0.05959253 = score(doc=3279,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.38690117 = fieldWeight in 3279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3279)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
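The tree above is Lucene's ClassicSimilarity explain output. A minimal sketch of how its numbers compose, assuming Lucene's documented TF-IDF formula (function and variable names are mine):

```python
import math

def classic_similarity_term(freq, idf, query_norm, field_norm):
    """Recompute one weight(_text_:term in doc) leaf of an explain tree."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight    # score(doc, freq)

# leaf for "information" in doc 3279: freq=2.0, idf=1.7554779,
# queryNorm=0.043984205, fieldNorm=0.078125
leaf = classic_similarity_term(2.0, 1.7554779, 0.043984205, 0.078125)
# ≈ 0.014975886 (the first leaf of result 1, up to float32 rounding)

# a document's score sums its matching leaves and multiplies by
# coord(matching clauses / total clauses); for result 1, coord(3/6) = 0.5
score = 0.5 * (0.014975886 + 0.10871997 + 0.029796265)
# ≈ 0.07674606, the value reported for result 1
```

The same composition accounts for every explain tree in this listing; only freq, idf, fieldNorm and the coord fraction change per document.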
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  2. Kulyukin, V.A.; Settle, A.: Ranked retrieval with semantic networks and vector spaces (2001) 0.06
    0.061977558 = product of:
      0.18593267 = sum of:
        0.011980709 = weight(_text_:information in 6934) [ClassicSimilarity], result of:
          0.011980709 = score(doc=6934,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.1551638 = fieldWeight in 6934, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6934)
        0.17395195 = weight(_text_:networks in 6934) [ClassicSimilarity], result of:
          0.17395195 = score(doc=6934,freq=8.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.8361396 = fieldWeight in 6934, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0625 = fieldNorm(doc=6934)
      0.33333334 = coord(2/6)
    
    Abstract
    The equivalence of semantic networks with spreading activation and vector spaces with dot product is investigated under ranked retrieval. Semantic networks are viewed as networks of concepts organized in terms of abstraction and packaging relations. It is shown that the two models can be effectively constructed from each other. A formal method is suggested to analyze the models in terms of their relative performance in the same universe of objects.
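The claimed correspondence can be glimpsed in a toy sketch (illustrative numbers, not the authors' construction): one step of spreading activation from query concepts over a concept-document weight matrix yields exactly the dot products used for vector-space ranking.

```python
# concept-document weights: rows = concepts, columns = documents
W = [[0.5, 0.0],
     [0.25, 0.5]]
q = [1.0, 1.0]  # initial activation on the query's concepts

# one spreading-activation step: each document accumulates the
# activation flowing in over its concept links
activation = [sum(q[i] * W[i][j] for i in range(len(q)))
              for j in range(len(W[0]))]

# vector-space view: score each document vector (a column of W)
# by its dot product with the query vector
dot_scores = [sum(q[i] * col[i] for i in range(len(q)))
              for col in zip(*W)]

assert activation == dot_scores  # same ranking, same numbers
```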
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.14, S.1224-1233
  3. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.06
    0.0589638 = product of:
      0.1179276 = sum of:
        0.020966241 = weight(_text_:information in 1319) [ClassicSimilarity], result of:
          0.020966241 = score(doc=1319,freq=8.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.27153665 = fieldWeight in 1319, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.07610398 = weight(_text_:networks in 1319) [ClassicSimilarity], result of:
          0.07610398 = score(doc=1319,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.36581108 = fieldWeight in 1319, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.020857384 = product of:
          0.04171477 = sum of:
            0.04171477 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.04171477 = score(doc=1319,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve related information that the user inquired about. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes integrating two existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
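The paper's specific integration is not reproduced here; as a generic, hedged illustration of the two ingredients, a Rocchio-style step reweights the query toward judged-relevant documents and expands it with their terms:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio feedback: move the query vector toward relevant documents
    and away from non-relevant ones; new terms enter via expansion."""
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)
    new_q = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        new_q[t] = max(w, 0.0)  # clip negative weights
    return new_q

q = rocchio({"web": 1.0}, relevant=[{"web": 1.0, "search": 1.0}],
            nonrelevant=[])
# "search" joins the query with weight 0.75; "web" rises to 1.75
```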
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 30(1998) nos.1/7, S.621-623
  4. Ingwersen, P.; Järvelin, K.: ¬The turn : integration of information seeking and retrieval in context (2005) 0.04
    0.03690674 = product of:
      0.07381348 = sum of:
        0.017157031 = weight(_text_:information in 1323) [ClassicSimilarity], result of:
          0.017157031 = score(doc=1323,freq=42.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.22220306 = fieldWeight in 1323, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.0382371 = weight(_text_:united in 1323) [ClassicSimilarity], result of:
          0.0382371 = score(doc=1323,freq=2.0), product of:
            0.24675635 = queryWeight, product of:
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.043984205 = queryNorm
            0.15495893 = fieldWeight in 1323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.018419355 = product of:
          0.03683871 = sum of:
            0.03683871 = weight(_text_:states in 1323) [ClassicSimilarity], result of:
              0.03683871 = score(doc=1323,freq=2.0), product of:
                0.24220218 = queryWeight, product of:
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.043984205 = queryNorm
                0.152099 = fieldWeight in 1323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1323)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    The Turn analyzes the research of information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R ranging from systems oriented laboratory IR research to social science oriented information seeking studies.
    TOC
    Introduction.- The Cognitive Framework for Information.- The Development of Information Seeking Research.- Systems-Oriented Information Retrieval.- Cognitive and User-Oriented Information Retrieval.- The Integrated IS&R Research Framework.- Implications of the Cognitive Framework for IS&R.- Towards a Research Program.- Conclusion.- Definitions.- References.- Index.
    Footnote
    Rez. in: Mitt. VÖB 59(2006) H.2, S.81-83 (O. Oberhauser): "With this volume, two outstanding representatives of European information science, professors Peter Ingwersen (Copenhagen) and Kalervo Järvelin (Tampere), have presented a work that may one day be called their opus magnum. This would not surprise me, for the authors undertake the ambitious attempt to unite, under a holistic cognitive approach, two research traditions of information science that have so far encountered each other only to a limited extent: the research field "Information Seeking and Retrieval" (IS&R), anchored primarily in the social sciences, and "Information Retrieval" (IR), situated mainly in computer science. In doing so they also aim to extend the cognitive approach - dominant for many years, but also criticized as too individualistic - so that technological, behavioral and cooperative aspects are taken into account in a coherent way. This is done in nine chapters as follows: - First, the two "camps" - the IR tradition oriented toward systems and laboratory experiments, and the IS&R faction oriented toward user questions - are contrasted and some central concepts are clarified. - The second chapter gives a detailed account of the cognitive direction of information science, especially with regard to the concept of information. - This is followed by an overview of previous research on "Information Seeking" (IS) - a highly useful introduction to the research questions and models, the research methodology, and the questions still open in this area, e.g. the neglect of user-system interaction caused by the one-sided focus on the user.
    - In an analogous way, the fourth chapter presents systems-oriented IR research in a concentrated overview, covering both the "laboratory model" and approaches such as natural language processing and expert systems. Aspects such as relevance, query modification and performance measurement are addressed, as is the methodology - from the first laboratory experiments up to TREC and beyond.
    - Chapter five contains a corresponding overview of the cognitive and user-oriented IR tradition. It shows which IR studies other than the laboratory-oriented ones can be carried out, ranging from early models (e.g. Taylor) via Belkin's ASK concept to Ingwersen's model of polyrepresentation, and from Bates' berrypicking approach to Vakkari's "task-based" IR model. Web IR, OKAPI and discussions of the concept of relevance are also addressed here. - In the following chapter the authors propose an integrated IS&R research model that takes into account the manifold relationships between information seekers, system developers, interfaces and the other aspects involved. Their approach unites traditional laboratory research with various user-oriented traditions from IS&R, in particular the empirical approaches to IS and to interactive IR, in a holistic cognitive model. - Chapter seven examines the implications of this model for IS&R; what is particularly striking is how complex the requests of information seekers are compared with the relative simplicity of the algorithms for finding relevant documents. Mapping the widely varying cognitive states of those posing queries in the course of system development is certainly no trivial task. How the problem of incorporating the central aspect of meaning can be solved remains an open question. - The eighth chapter attempts to translate the points discussed so far into an IS&R research program (processes - behavior - system functionality - performance), along with some critical remarks on research practice to date. - The concluding ninth chapter briefly summarizes the book and can therefore also be read as a way into the topic.
    This is followed by a very useful glossary of all the important terms used in the book, a bibliography and a subject index. Ingwersen and Järvelin have presented a very demanding yet readable book. The survey chapters and discussions offered here are not an introduction to information science, but they cover a large part of the subfields that are current in this discipline today and touched upon by ongoing research activities and publications. One could also put it - perhaps a little pointedly - like this: what is addressed here is, in effect, modern information science. The attempt to unite the two research traditions will certainly secure this work a place in the history of the discipline. The title of the book seems not entirely fortunate. "The Turn" is meant to signify a turn, namely toward an integrated view of IS and IR. The subtitle probably conveys this better, but the authors presumably found it too dry. A pity, for "The Turn" already exists in our union catalogue, for example, albeit with the addition "from the Cold War to a new era; the United States and the Soviet Union 1983-1990". The publisher, which has otherwise delivered a solid (if not exactly inexpensive) product, would have done better to prevent such fuzzy duplication. Regardless of this, I recommend this important book for acquisition without reservation; it should not be missing from any larger library."
    Series
    The Kluwer international series on information retrieval ; 18
    Theme
    Information
  5. Goslin, K.; Hofmann, M.: ¬A Wikipedia powered state-based approach to automatic search query enhancement (2018) 0.03
    0.02975843 = product of:
      0.089275286 = sum of:
        0.012707461 = weight(_text_:information in 5083) [ClassicSimilarity], result of:
          0.012707461 = score(doc=5083,freq=4.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.16457605 = fieldWeight in 5083, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5083)
        0.07656782 = product of:
          0.15313564 = sum of:
            0.15313564 = weight(_text_:states in 5083) [ClassicSimilarity], result of:
              0.15313564 = score(doc=5083,freq=6.0), product of:
                0.24220218 = queryWeight, product of:
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.043984205 = queryNorm
                0.63226366 = fieldWeight in 5083, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5083)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper describes the development and testing of a novel Automatic Search Query Enhancement (ASQE) algorithm, the Wikipedia N Sub-state Algorithm (WNSSA), which utilises Wikipedia as the sole data source for prior knowledge. This algorithm is built upon the concept of iterative states and sub-states, harnessing the power of Wikipedia's data set and link information to identify and utilise reoccurring terms to aid term selection and weighting during enhancement. This algorithm is designed to prevent query drift by making callbacks to the user's original search intent by persisting the original query between internal states with additional selected enhancement terms. The developed algorithm has been shown to improve both short and long queries by providing a better understanding of the query and available data. The proposed algorithm was compared against five existing ASQE algorithms that utilise Wikipedia as the sole data source, showing an average Mean Average Precision (MAP) improvement of 0.273 over the tested existing ASQE algorithms.
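The reported figure uses Mean Average Precision, the mean over queries of average precision; a minimal sketch of the standard definition (not the paper's evaluation code):

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked result list."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank  # precision at each relevant hit
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

ap = average_precision(["d1", "d2", "d3"], {"d1", "d3"})
# (1/1 + 2/3) / 2 ≈ 0.8333
```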
    Source
    Information processing and management. 54(2018) no.4, S.726-739
  6. Fowler, R.H.; Wilson, B.A.; Fowler, W.A.L.: Information navigator : an information system using associative networks for display and retrieval (1992) 0.03
    0.029668488 = product of:
      0.08900546 = sum of:
        0.023773482 = weight(_text_:information in 919) [ClassicSimilarity], result of:
          0.023773482 = score(doc=919,freq=14.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.3078936 = fieldWeight in 919, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=919)
        0.06523198 = weight(_text_:networks in 919) [ClassicSimilarity], result of:
          0.06523198 = score(doc=919,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.31355235 = fieldWeight in 919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.046875 = fieldNorm(doc=919)
      0.33333334 = coord(2/6)
    
    Abstract
    Document retrieval is a highly interactive process dealing with large amounts of information. Visual representations can provide both a means for managing the complexity of large information structures and an interface style well suited to interactive manipulation. The system we have designed utilizes visually displayed graphic structures and a direct manipulation interface style to supply an integrated environment for retrieval. A common visually displayed network structure is used for query, document content, and term relations. A query can be modified through direct manipulation of its visual form by incorporating terms from any other information structure the system displays. An associative thesaurus of terms and an inter-document network provide information about a document collection that can complement other retrieval aids. Visualization of these large data structures makes use of fisheye views and overview diagrams to help overcome some of the inherent difficulties of orientation and navigation in large information structures.
  7. Lin, J.; DiCuccio, M.; Grigoryan, V.; Wilbur, W.J.: Navigating information spaces : a case study of related article search in PubMed (2008) 0.03
    0.028441414 = product of:
      0.08532424 = sum of:
        0.02009226 = weight(_text_:information in 2124) [ClassicSimilarity], result of:
          0.02009226 = score(doc=2124,freq=10.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.2602176 = fieldWeight in 2124, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2124)
        0.06523198 = weight(_text_:networks in 2124) [ClassicSimilarity], result of:
          0.06523198 = score(doc=2124,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.31355235 = fieldWeight in 2124, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2124)
      0.33333334 = coord(2/6)
    
    Abstract
    The concept of an "information space" provides a powerful metaphor for guiding the design of interactive retrieval systems. We present a case study of related article search, a browsing tool designed to help users navigate the information space defined by results of the PubMed® search engine. This feature leverages content-similarity links that tie MEDLINE® citations together in a vast document network. We examine the effectiveness of related article search from two perspectives: a topological analysis of networks generated from information needs represented in the TREC 2005 genomics track and a query log analysis of real PubMed users. Together, data suggest that related article search is a useful feature and that browsing related articles has become an integral part of how users interact with PubMed.
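The "content-similarity links" above are, generically, pairwise similarities between term vectors; a minimal cosine sketch (PubMed's actual related-article scoring is its own probabilistic model, so this stand-in is an assumption):

```python
import math

def cosine(a, b):
    """Cosine similarity of two sparse term-weight vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# link two citations when their similarity clears a threshold
d1 = {"gene": 1.0, "expression": 1.0}
d2 = {"gene": 1.0, "regulation": 1.0}
linked = cosine(d1, d2) > 0.3
```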
    Source
    Information processing and management. 44(2008) no.5, S.1771-1783
  8. Bettencourt, N.; Silva, N.; Barroso, J.: Semantically enhancing recommender systems (2016) 0.03
    0.027734347 = product of:
      0.08320304 = sum of:
        0.017971063 = weight(_text_:information in 3374) [ClassicSimilarity], result of:
          0.017971063 = score(doc=3374,freq=8.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.23274569 = fieldWeight in 3374, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3374)
        0.06523198 = weight(_text_:networks in 3374) [ClassicSimilarity], result of:
          0.06523198 = score(doc=3374,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.31355235 = fieldWeight in 3374, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3374)
      0.33333334 = coord(2/6)
    
    Abstract
    As the amount of content and the number of users in social relationships continually grow on the Internet, resource sharing and access policy management are difficult, time-consuming and error-prone. Cross-domain recommendation of private or protected resources managed and secured by each domain's specific access rules is impracticable due to private security policies and poor sharing mechanisms. This work focuses on exploiting a resource's content, users' preferences, users' social networks and semantic information to cross-relate different resources through their meta-information, using recommendation techniques that combine collaborative-filtering techniques with semantic annotations by generating associations between resources. The semantic similarities established between resources are used in a hybrid recommendation engine that interprets user and resource semantic information. The recommendation engine allows the promotion and discovery of unknown-unknown resources to users who could not even know about the existence of those resources, thus providing means to solve the cross-domain recommendation of private or protected resources.
    Series
    Communications in computer and information science; 631
  9. Roy, R.S.; Agarwal, S.; Ganguly, N.; Choudhury, M.: Syntactic complexity of Web search queries through the lenses of language models, networks and users (2016) 0.02
    0.02244316 = product of:
      0.06732948 = sum of:
        0.0129694985 = weight(_text_:information in 3188) [ClassicSimilarity], result of:
          0.0129694985 = score(doc=3188,freq=6.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.16796975 = fieldWeight in 3188, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3188)
        0.054359984 = weight(_text_:networks in 3188) [ClassicSimilarity], result of:
          0.054359984 = score(doc=3188,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.26129362 = fieldWeight in 3188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3188)
      0.33333334 = coord(2/6)
    
    Abstract
    Across the world, millions of users interact with search engines every day to satisfy their information needs. As the Web grows bigger over time, such information needs, manifested through user search queries, also become more complex. However, there has been no systematic study that quantifies the structural complexity of Web search queries. In this research, we make an attempt towards understanding and characterizing the syntactic complexity of search queries using a multi-pronged approach. We use traditional statistical language modeling techniques to quantify and compare the perplexity of queries with natural language (NL). We then use complex network analysis for a comparative analysis of the topological properties of queries issued by real Web users and those generated by statistical models. Finally, we conduct experiments to study whether search engine users are able to identify real queries, when presented along with model-generated ones. The three complementary studies show that the syntactic structure of Web queries is more complex than what n-grams can capture, but simpler than NL. Queries, thus, seem to represent an intermediate stage between syntactic and non-syntactic communication.
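The language-model comparison above rests on perplexity; a minimal add-k-smoothed bigram sketch (my own toy variant, not the paper's models):

```python
import math
from collections import Counter

def bigram_perplexity(train, test, vocab_size, k=1.0):
    """Perplexity of `test` under an add-k-smoothed bigram model."""
    bigrams = Counter(zip(train, train[1:]))
    unigrams = Counter(train)
    log_prob, n = 0.0, 0
    for w1, w2 in zip(test, test[1:]):
        p = (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * vocab_size)
        log_prob += math.log2(p)
        n += 1
    return 2 ** (-log_prob / n)

train = "cheap flights to new york".split()
pp = bigram_perplexity(train, "cheap flights to".split(), vocab_size=5)
# ≈ 3.0: both test bigrams get probability (1+1)/(1+5) = 1/3
```

Lower perplexity means the model finds the token sequence less surprising; the paper's finding is that real queries sit between n-gram-like and full natural-language structure on this scale.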
    Source
    Information processing and management. 52(2016) no.5, S.923-948
  10. Boyack, K.W.; Wylie,B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.02
    0.019038055 = product of:
      0.057114165 = sum of:
        0.014975886 = weight(_text_:information in 1352) [ClassicSimilarity], result of:
          0.014975886 = score(doc=1352,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.19395474 = fieldWeight in 1352, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1352)
        0.04213828 = product of:
          0.08427656 = sum of:
            0.08427656 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
              0.08427656 = score(doc=1352,freq=4.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.54716086 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
  11. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.02
    0.018971303 = product of:
      0.05691391 = sum of:
        0.012707461 = weight(_text_:information in 3090) [ClassicSimilarity], result of:
          0.012707461 = score(doc=3090,freq=4.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.16457605 = fieldWeight in 3090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.044206448 = product of:
          0.088412896 = sum of:
            0.088412896 = weight(_text_:states in 3090) [ClassicSimilarity], result of:
              0.088412896 = score(doc=3090,freq=2.0), product of:
                0.24220218 = queryWeight, product of:
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.043984205 = queryNorm
                0.3650376 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, S.1261-1269
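    The link-content conjecture above lends itself to a direct computational check: compare a page's text to the text of its in-linking neighbors. A minimal sketch in Python using bag-of-words cosine similarity (all data below is illustrative, not from the study):

    ```python
    from collections import Counter
    from math import sqrt

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two bag-of-words vectors."""
        dot = sum(a[t] * b[t] for t in a if t in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def link_content_score(page: str, neighbors: list[str]) -> float:
        """Mean text similarity between a page and the pages that link to it."""
        p = Counter(page.lower().split())
        sims = [cosine(p, Counter(n.lower().split())) for n in neighbors]
        return sum(sims) / len(sims) if sims else 0.0

    # Toy check: a page about search engines should be more similar to
    # in-linking pages on the same topic than to an unrelated page.
    page = "web search engines rank pages by link analysis"
    inlinks = ["link analysis improves web search ranking",
               "search engines crawl and rank web pages"]
    unrelated = ["recipes for sourdough bread and pastry"]
    assert link_content_score(page, inlinks) > link_content_score(page, unrelated)
    ```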
  12. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.02
    
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to the existing textual features, the second approach incorporates structural features of scientific literatures, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of pathbased metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing the historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the title and abstract of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to a better performance of the method. This thesis discusses some of the possible contributing factors to these results. 
    Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting the future co-citation links between disparate research papers.
    Footnote
    A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy, Monash University, Faculty of Information Technology.
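    The path-based metadata features described in the abstract can be illustrated with a toy heterogeneous bibliographic network. The sketch below counts a few simple meta-path instances (shared authors, shared venue, shared references) and combines them into a crude link score; the feature names, weights, and data are illustrative assumptions, not the thesis's actual model:

    ```python
    papers = {
        "p1": {"authors": {"smith"}, "venue": "JASIST", "refs": {"r1", "r2"}},
        "p2": {"authors": {"smith", "lee"}, "venue": "JASIST", "refs": {"r2", "r3"}},
        "p3": {"authors": {"chen"}, "venue": "SIGIR", "refs": {"r9"}},
    }

    def metapath_features(a: str, b: str) -> dict[str, int]:
        """Count simple meta-path instances linking two papers via metadata nodes."""
        pa, pb = papers[a], papers[b]
        return {
            "paper-author-paper": len(pa["authors"] & pb["authors"]),
            "paper-venue-paper": int(pa["venue"] == pb["venue"]),
            "paper-ref-paper": len(pa["refs"] & pb["refs"]),  # bibliographic coupling
        }

    def link_score(a: str, b: str, weights=None) -> float:
        """Weighted sum of meta-path counts as a crude co-citation link predictor."""
        weights = weights or {"paper-author-paper": 1.0,
                              "paper-venue-paper": 0.5,
                              "paper-ref-paper": 1.0}
        f = metapath_features(a, b)
        return sum(weights[k] * v for k, v in f.items())

    # Papers sharing authors, venue, and references score higher than unrelated pairs.
    assert link_score("p1", "p2") > link_score("p1", "p3")
    ```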
  13. Bilal, D.; Kirby, J.: Differences and similarities in information seeking : children and adults as Web users (2002) 0.02
    
    Abstract
    This study examined the success and information seeking behaviors of seventh-grade science students and graduate students in information science in using Yahooligans! Web search engine/directory. It investigated these users' cognitive, affective, and physical behaviors as they sought the answer for a fact-finding task. It analyzed and compared the overall patterns of children's and graduate students' Web activities, including searching moves, browsing moves, backtracking moves, looping moves, screen scrolling, target location and deviation moves, and the time they took to complete the task. The authors applied Bilal's Web Traversal Measure to quantify these users' effectiveness, efficiency, and quality of moves they made. Results were based on 14 children's Web sessions and nine graduate students' sessions. Both groups' Web activities were captured online using Lotus ScreenCam, a software package that records and replays online activities in Web browsers. Children's affective states were captured via exit interviews. Graduate students' affective states were extracted from the journal writings they kept during the traversal process. The study findings reveal that 89% of the graduate students found the correct answer to the search task as opposed to 50% of the children. Based on the Measure, graduate students' weighted effectiveness, efficiency, and quality of the Web moves they made were much higher than those of the children. Regardless of success and weighted scores, however, similarities and differences in information seeking were found between the two groups. Yahooligans! poor structure of keyword searching was a major factor that contributed to the "breakdowns" children and graduate students experienced. Unlike children, graduate students were able to recover from "breakdowns" quickly and effectively. Three main factors influenced these users' performance: ability to recover from "breakdowns", navigational style, and focus on task. 
    Children and graduate students made recommendations for improving Yahooligans! interface design. Implications for Web user training and system design improvements are made.
    Footnote
    Contribution to a special issue: "Issues of context in information retrieval (IR)"
    Source
    Information processing and management. 38(2002) no.5, S.649-670
  14. Principles of semantic networks : explorations in the representation of knowledge (1991) 0.02
    
  15. Sacco, G.M.: Dynamic taxonomies and guided searches (2006) 0.02
    
    Abstract
    A new search paradigm, in which the primary user activity is the guided exploration of a complex information space rather than the retrieval of items based on precise specifications, is proposed. The author claims that this paradigm is the norm in most practical applications, and that solutions based on traditional search methods are not effective in this context. He then presents a solution based on dynamic taxonomies, a knowledge management model that effectively guides users to reach their goal while giving them total freedom in exploring the information base. Applications, benefits, and current research are discussed.
    Date
    22. 7.2006 17:56:22
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.792-796
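    Sacco's dynamic-taxonomy model can be sketched in a few lines: each concept has an extension (the set of documents classified under it), and after every selection the taxonomy is reduced to the concepts whose extensions still intersect the current result set. The data and concept names below are illustrative:

    ```python
    docs = {
        1: {"topic/databases", "year/2005"},
        2: {"topic/databases", "year/2006"},
        3: {"topic/ir", "year/2006"},
    }

    def extension(concept: str) -> set[int]:
        """All documents classified under a concept."""
        return {d for d, tags in docs.items() if concept in tags}

    def reduced_taxonomy(selection: set[int]) -> set[str]:
        """Concepts that remain useful for further narrowing of the selection."""
        all_concepts = set().union(*docs.values())
        return {c for c in all_concepts if extension(c) & selection}

    # Selecting "topic/databases" leaves only concepts that co-occur with it.
    sel = extension("topic/databases")
    assert sel == {1, 2}
    assert "topic/ir" not in reduced_taxonomy(sel)
    assert "year/2006" in reduced_taxonomy(sel)
    ```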
  16. Ma, N.; Zheng, H.T.; Xiao, X.: ¬An ontology-based latent semantic indexing approach using long short-term memory networks (2017) 0.02
    
    Abstract
    Nowadays, online data shows an astonishing increase and the issue of semantic indexing remains an open question. Ontologies and knowledge bases have been widely used to optimize performance. However, researchers are placing increased emphasis on internal relations of ontologies but neglect latent semantic relations between ontologies and documents. They generally annotate instances mentioned in documents, which are related to concepts in ontologies. In this paper, we propose an Ontology-based Latent Semantic Indexing approach utilizing Long Short-Term Memory networks (LSTM-OLSI). We utilize an importance-aware topic model to extract document-level semantic features and leverage ontologies to extract word-level contextual features. Then we encode the above two levels of features and match their embedding vectors utilizing LSTM networks. Finally, the experimental results reveal that LSTM-OLSI outperforms existing techniques and demonstrates deep comprehension of instances and articles.
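    The LSTM networks used for matching in LSTM-OLSI are, at their core, repeated applications of a gated cell update. As a toy illustration only (a scalar cell with arbitrary weights, not the paper's architecture), the update can be written as:

    ```python
    from math import tanh, exp

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + exp(-x))

    def lstm_step(x, h, c, w):
        """One forward step of a scalar (1-dimensional) LSTM cell.
        w holds (input, hidden, bias) weights for the i, f, o, g gates."""
        i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])  # input gate
        f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])  # forget gate
        o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])  # output gate
        g = tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])     # candidate value
        c = f * c + i * g          # cell state: forget old content, write new
        h = o * tanh(c)            # hidden state: gated output
        return h, c

    def encode(seq, w):
        """Run the cell over a sequence; the final hidden state is the encoding."""
        h = c = 0.0
        for x in seq:
            h, c = lstm_step(x, h, c, w)
        return h

    w = {g: (0.5, 0.5, 0.0) for g in ("i", "f", "o", "g")}  # arbitrary demo weights
    enc = encode([1.0, 2.0, 3.0], w)
    assert -1.0 < enc < 1.0  # h = o * tanh(c) is always bounded by (-1, 1)
    ```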
  17. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.01
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  18. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.01
    
    Abstract
    In this article we present a method for retrieving documents from a digital library through a visual interface based on automatically generated concepts. We used a vocabulary generation algorithm to generate a set of concepts for the digital library and a technique called the max-min distance technique to cluster them. Additionally, the concepts were visualized in a spring embedding graph layout to depict the semantic relationship among them. The resulting graph layout serves as an aid to users for retrieving documents. An online archive containing the contents of D-Lib Magazine from July 1995 to May 2002 was used to test the utility of an implemented retrieval and visualization system. We believe that the method developed and tested can be applied to many different domains to help users get a better understanding of online document collections and to minimize users' cognitive load during execution of search tasks. Over the past few years, the volume of information available through the World Wide Web has been expanding exponentially. Never has so much information been so readily available and shared among so many people. Unfortunately, the unstructured nature and huge volume of information accessible over networks have made it hard for users to sift through and find relevant information. To deal with this problem, information retrieval (IR) techniques have gained more intensive attention from both industrial and academic researchers. Numerous IR techniques have been developed to help deal with the information overload problem. These techniques concentrate on mathematical models and algorithms for retrieval. Popular IR models such as the Boolean model, the vector-space model, the probabilistic model and their variants are well established.
    From the user's perspective, however, it is still difficult to use current information retrieval systems. Users frequently have problems expressing their information needs and translating those needs into queries. This is partly due to the fact that information needs cannot be expressed appropriately in systems terms. It is not unusual for users to input search terms that are different from the index terms information systems use. Various methods have been proposed to help users choose search terms and articulate queries. One widely used approach is to incorporate into the information system a thesaurus-like component that represents both the important concepts in a particular subject area and the semantic relationships among those concepts. Unfortunately, the development and use of thesauri is not without its own problems. The thesaurus employed in a specific information system has often been developed for a general subject area and needs significant enhancement to be tailored to the information system where it is to be used. This thesaurus development process, if done manually, is both time-consuming and labor-intensive. Usage of a thesaurus in searching is complex and may raise barriers for the user. For illustration purposes, let us consider two scenarios of thesaurus usage. In the first scenario the user inputs a search term and the thesaurus then displays a matching set of related terms. Without an overview of the thesaurus - and without the ability to see the matching terms in the context of other terms - it may be difficult to assess the quality of the related terms in order to select the correct term. In the second scenario the user browses the whole thesaurus, which is organized as an alphabetically ordered list. The problem with this approach is that the list may be long, and it does not show users the global semantic relationships among all the listed terms.
    Nevertheless, because thesaurus use has been shown to improve retrieval, for our method we integrate functions in the search interface that permit users to explore built-in search vocabularies to improve retrieval from digital libraries. Our method automatically generates the terms and their semantic relationships representing relevant topics covered in a digital library. We call these generated terms the "concepts", and the generated terms and their semantic relationships we call the "concept space". Additionally, we used a visualization technique to display the concept space and allow users to interact with this space. The automatically generated term set is considered to be more representative of the subject area in a corpus than an "externally" imposed thesaurus, and our method has the potential of saving a significant amount of time and labor for those who have been manually creating thesauri as well. Information visualization is an emerging discipline that has developed very quickly over the last decade. With growing volumes of documents and associated complexities, information visualization has become increasingly important. Researchers have found information visualization to be an effective way to use and understand information while minimizing a user's cognitive load. Our work was based on an algorithmic approach of concept discovery and association. Concepts are discovered using an algorithm based on an automated thesaurus generation procedure. Subsequently, similarities among terms are computed using the cosine measure, and the associations among terms are established using a method known as max-min distance clustering. The concept space is then visualized in a spring embedding graph, which roughly shows the semantic relationships among concepts in a 2-D visual representation. The semantic space of the visualization is used as a medium for users to retrieve the desired documents.
    In the remainder of this article, we present our algorithmic approach of concept generation and clustering, followed by a description of the visualization technique and interactive interface. The paper ends with key conclusions and discussions on future work.
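    The two algorithmic steps named in the abstract - cosine similarity among terms and max-min distance clustering - can be sketched as follows. This is a simplified reading (greedy max-min seed selection over illustrative co-occurrence vectors), not the authors' implementation:

    ```python
    from math import sqrt

    vectors = {                 # term -> co-occurrence counts with 3 context features
        "retrieval": (4, 1, 0),
        "indexing":  (3, 2, 0),
        "graph":     (0, 1, 4),
        "network":   (0, 2, 5),
    }

    def cosine_distance(u, v):
        """1 - cosine similarity, used as the clustering distance."""
        dot = sum(a * b for a, b in zip(u, v))
        nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
        return 1.0 - dot / (nu * nv)

    def maxmin_seeds(terms, k):
        """Greedy max-min selection: each new seed maximizes its minimum
        distance to the seeds chosen so far."""
        seeds = [terms[0]]
        while len(seeds) < k:
            best = max((t for t in terms if t not in seeds),
                       key=lambda t: min(cosine_distance(vectors[t], vectors[s])
                                         for s in seeds))
            seeds.append(best)
        return seeds

    seeds = maxmin_seeds(list(vectors), 2)
    # The two seeds should come from the two different topical groups.
    assert "retrieval" in seeds or "indexing" in seeds
    assert "graph" in seeds or "network" in seeds
    ```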
  19. Case, D.O.: Looking for information : a survey of research on information seeking, needs, and behavior (2002) 0.01
    
    Footnote
    Review in: JASIST 54(2003) no.7, S.695-697 (R. Savolainen): "Donald O. Case has written an ambitious book to create an overall picture of the major approaches to information needs and seeking (INS) studies. The aim to write an extensive review is reflected in the list of references containing about 700 items. The high ambitions are explained on p. 14, where Case states that he is aiming at a multidisciplinary understanding of the concept of information seeking. In the Preface, the author characterizes his book as an introduction to the topic for students at the graduate level, as well as a review and handbook for scholars engaged in information behavior research. In my view, Looking for Information is particularly welcome as an academic textbook because the field of INS studies suffers from the lack of monographs. Along with the continuous growth of the number of journal articles and conference papers, there is a genuine need for a book that picks up the numerous pieces and puts them together. The use of the study as a textbook is facilitated by clearly delineated sections on major themes and the wealth of concrete examples of information seeking in everyday contexts. The book is lucidly written and it is accessible to novice readers, too. At first glance, the idea of providing a comprehensive review of INS studies may seem a mission impossible because the current number of articles, papers, and other contributions in this field is nearing the 10,000 range (p. 224). Donald Case is not alone in the task of coming to grips with an increasing number of studies; similar problems have been faced by those writing INS-related chapters for the Annual Review of Information Science and Technology (ARIST). Case has solved the problem of "too many publications to be reviewed" by concentrating on the INS literature published during the last two decades. Secondly, studies on library use and information retrieval are discussed only to a limited extent.
    In addition, Case is highly selective as to studies focusing on the use of specific sources and channels such as the WWW. These delineations are reasonable, even though they beg some questions. First, how should one draw the line between studies on information seeking and information retrieval? Case does not discuss this question in greater detail, although in recent years the overlapping areas of information seeking and retrieval studies have broadened, along with the growing importance of the WWW in information seeking/retrieval. Secondly, how can one define the concept of information searching (or, more specifically, Internet or Web searching) in relation to information seeking and information retrieval? In the field of Web searching studies, there is an increasing number of contributions that are of direct relevance to information-seeking studies. Clearly, the advent of the Internet, particularly the Web, has blurred the previous lines between INS and IR literature, making them less clear-cut. The book consists of five main sections and comprises 13 chapters. There is an Appendix serving the needs of an INS textbook (questions for discussion and application). The structure of the book is meticulously planned and, as a whole, it offers a sufficiently balanced contribution to theoretical, methodological, and empirical issues of INS. The title, Looking for Information: A Survey of Research on Information Seeking, Needs, and Behavior, aptly describes the main substance of the book. ... It is easy to agree with Case about the significance of the problem of specialization and fragmentation. This problem seems to be concomitant with the broadening field of INS research. In itself, Case's book can be interpreted as a struggle against this fragmentation. His book suggests that this struggle is not hopeless and that it is still possible to draw an overall picture of the evolving research field.
    The major pieces of the puzzle were found and the book will provide a useful overview of INS studies for many years."
    Series
    Library and information science
  20. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.01
    
    Abstract
    The digital library system Daffodil is targeted at strategic support of users during the information search process. For searching, exploring and managing digital library objects it provides user-customisable information seeking patterns over a federation of heterogeneous digital libraries. In this paper evaluation results with respect to retrieval effectiveness, efficiency and user satisfaction are presented. The analysis focuses on strategic support for the scientific work-flow. Daffodil supports the whole work-flow, from data source selection through information seeking to the representation, organisation and reuse of information. By embedding high-level search functionality into the scientific work-flow, the user experiences better strategic system support due to a more systematic work process. These ideas have been implemented in Daffodil followed by a qualitative evaluation. The evaluation has been conducted with 28 participants, ranging from information seeking novices to experts. The results are promising, as they support the chosen model.
    Date
    16.11.2008 16:22:48

Languages

  • e 189
  • d 19
  • chi 1
  • f 1

Types

  • a 184
  • el 16
  • m 16
  • r 4
  • p 2
  • s 2
  • x 2