Search (728 results, page 2 of 37)

  • year_i:[2010 TO 2020}
  1. Chu, H.: Information representation and retrieval in the digital age (2010) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 377) [ClassicSimilarity], result of:
              0.114715785 = score(doc=377,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 377, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=377)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Information representation and retrieval : an overview -- Information representation I : basic approaches -- Information representation II : related topics -- Language in information representation and retrieval -- Retrieval techniques and query representation -- Retrieval approaches -- Information retrieval models -- Information retrieval systems -- Retrieval of information unique in content or format -- The user dimension in information representation and retrieval -- Evaluation of information representation and retrieval -- Artificial intelligence in information representation and retrieval.
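The nested breakdowns under each hit are Lucene explain trees for the ClassicSimilarity (TF-IDF) model. As a minimal sketch, the arithmetic of the first tree can be reproduced directly from the values it reports (doc 377, term "ii"):

```python
import math

# Values reported in the explain tree for hit 1.
freq = 2.0                  # termFreq
idf = 5.4016213             # idf(docFreq=541, maxDocs=44218)
query_norm = 0.050836053    # queryNorm
field_norm = 0.0546875      # fieldNorm(doc=377)

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.2745971 = queryWeight
field_weight = tf * idf * field_norm  # 0.41776034 = fieldWeight

term_score = query_weight * field_weight  # 0.114715785
# The two coord(1/2) factors each halve the score, giving the
# final hit score of 0.028678946.
final_score = term_score * 0.5 * 0.5
print(f"{final_score:.9f}")
```

The same pattern holds for every hit in the list: queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and each coord(1/2) factor halves the result because only one of two query clauses matched.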
  2. Munkelt, J.: Erstellung einer DNB-Retrieval-Testkollektion (2018) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 4310) [ClassicSimilarity], result of:
              0.114715785 = score(doc=4310,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 4310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    II, 79 S
  3. Fühles-Ubach, S.; Schaer, P.; Lepsky, K.; Seidler-de Alwis, R.: Data Librarian : ein neuer Studienschwerpunkt für wissenschaftliche Bibliotheken und Forschungseinrichtungen (2019) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 5836) [ClassicSimilarity], result of:
              0.114715785 = score(doc=5836,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 5836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5836)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The article deals with the new study focus "Data Librarian" in the degree program "Data and Information Science", offered since the 2018/19 winter semester at the Institut für Informationswissenschaft of the Technische Hochschule Köln. Developed as part of a joint accreditation of all of the institute's bachelor's programs, it bundles and teaches, among other things, comprehensive knowledge in the areas of data structures, data processing, information systems, data analysis, and information research during the first semesters. The six-month practical semester takes place in an academic library or information institution, before the focus modules Research Data I+II, scholarly communication, scientometrics, and automatic indexing are taught.
  4. Cai, F.; Wang, S.; Rijke, M.de: Behavior-based personalization in web search (2017) 0.03
    0.028384795 = product of:
      0.05676959 = sum of:
        0.05676959 = product of:
          0.11353918 = sum of:
            0.11353918 = weight(_text_:ii in 3527) [ClassicSimilarity], result of:
              0.11353918 = score(doc=3527,freq=6.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.4134755 = fieldWeight in 3527, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3527)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Personalized search approaches tailor search results to users' current interests, so as to help improve the likelihood of a user finding relevant documents for their query. Previous work on personalized search focuses on using the content of the user's query and of the documents clicked to model the user's preference. In this paper we focus on a different type of signal: We investigate the use of behavioral information for the purpose of search personalization. That is, we consider clicks and dwell time for reranking an initially retrieved list of documents. In particular, we (i) investigate the impact of distributions of users and queries on document reranking; (ii) estimate the relevance of a document for a query at 2 levels, at the query-level and at the word-level, to alleviate the problem of sparseness; and (iii) perform an experimental evaluation both for users seen during the training period and for users not seen during training. For the latter, we explore the use of information from similar users who have been seen during the training period. We use the dwell time on clicked documents to estimate a document's relevance to a query, and perform Bayesian probabilistic matrix factorization to generate a relevance distribution of a document over queries. Our experiments show that: (i) for personalized ranking, behavioral information helps to improve retrieval effectiveness; and (ii) given a query, merging information inferred from behavior of a particular user and from behaviors of other users with a user-dependent adaptive weight outperforms any combination with a fixed weight.
    Footnote
    A preliminary version of this paper was published in the proceedings of SIGIR '14. In this extension, we (i) extend the behavioral personalization search model introduced there to deal with queries issued by new users for whom long-term search logs are unavailable; (ii) examine the impact of sparseness on the performance of our model by considering both word-level and query-level modeling, as we find that the word-document relevance matrix is less sparse than the query-document relevance matrix; (iii) investigate the effectiveness of our behavior-based reranking model with and without assuming a uniform distribution of users as users may behave differently; (iv) include more related work and provide a detailed discussion of the experimental results.
  5. (2013 ff.) 0.03
    0.027550334 = product of:
      0.05510067 = sum of:
        0.05510067 = product of:
          0.11020134 = sum of:
            0.11020134 = weight(_text_:22 in 2851) [ClassicSimilarity], result of:
              0.11020134 = score(doc=2851,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.61904186 = fieldWeight in 2851, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2851)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
  6. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.02691371 = product of:
      0.05382742 = sum of:
        0.05382742 = product of:
          0.16148226 = sum of:
            0.16148226 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.16148226 = score(doc=5820,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  7. Kabinett beschließt Deutsche Digitale Bibliothek (2010) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 3243) [ClassicSimilarity], result of:
              0.098327816 = score(doc=3243,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 3243, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3243)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Bereits Anfang Dezember 2009 hat das Bundeskabinett die Errichtung der Deutschen Digitalen Bibliothek (DDB) beschlossen, deren Start 2011 erfolgen soll. Kultur- und Medienstaatsminister Bernd Neumann erklärte: »Durch die DDB werden in Zukunft Datenbanken von über 30 000 Kultur- und Wissenschaftseinrichtungen in Deutschland vernetzt und über ein einziges nationales Portal allen Bürgern zugänglich gemacht werden. Sie ist ein Jahrhundertprojekt in der digitalen Welt und leistet einen herausragenden Beitrag zur Bewahrung unserer kulturellen Identität und zum Urheberrechtsschutz.« Vorgesehen ist, dass die DDB digitale Kopien von Büchern, Bildern, Archivalien, Skulpturen, Noten, Musik und Filmen aus Kultur- und Wissenschaftseinrichtungen (Bibliotheken, Archiven, Museen, Mediatheken, Kulturdenkmalen, wissenschaftlichen Instituten et cetara) umfasst. Die DDB ist ein Gemeinschaftsvorhaben von Bund, Ländern und Kommunen. Der Aufbau der zentralen Infrastruktur wird mit Mitteln aus dem Konjunkturprogramm II des Bundes finanziert, der Dauerbetrieb ab 2011 zur Hälfte von Bund und Ländern."
  8. Latif, A.: Understanding linked open data : for linked data discovery, consumption, triplification and application development (2011) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 128) [ClassicSimilarity], result of:
              0.098327816 = score(doc=128,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 128, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=128)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Linked Open Data initiative has played a vital role in the realization of the Semantic Web at a global scale by publishing and interlinking diverse data sources on the Web. Access to this huge amount of Linked Data presents exciting benefits and opportunities. However, the inherent complexity of understanding Linked Data and the lack of use cases and applications that can consume Linked Data hinder its full exploitation by naïve web users and developers. This book aims to address these core limitations of Linked Open Data and contributes by presenting: (i) a conceptual model for a fundamental understanding of the Linked Open Data sphere, (ii) a Linked Data application to search, consume, and aggregate various Linked Data resources, (iii) a semantification and interlinking technique for the conversion of legacy data, and (iv) potential application areas of Linked Open Data.
  9. Ceynowa, K.: Informationsdienste im mobilen Internet : das Beispiel der Bayerischen Staatsbibliothek (2011) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 197) [ClassicSimilarity], result of:
              0.098327816 = score(doc=197,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=197)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Based on the conviction that access to digital information will in future take place primarily, if not exclusively, via mobile devices such as smartphones and tablets, the Bayerische Staatsbibliothek is currently making its basic services as well as its digital content offerings successively available as mobile applications. First, the library's online catalog and website were programmed as generic mobile applications that run in all common smartphone browsers. In a further step, the Bayerische Staatsbibliothek made 50 digitized showpieces from its holdings available as the native app "Famous Books - Treasures of the Bavarian State Library" for iPad and iPhone, followed in spring 2011 by the app "Islamic Books - Oriental treasures of the Bavarian State Library". The Bayerische Staatsbibliothek is currently also experimenting with augmented-reality applications. In a mobile application "Ludwig II.", digitized library content on the famous Bavarian "fairy-tale king" is to be offered as a georeferenced augmented-reality application at prominent sites associated with the king, such as Neuschwanstein Castle. The article presents the various mobile services and applications of the Bayerische Staatsbibliothek, examines their technical implementation, and assesses the opportunities and limits of library services on the mobile internet.
  10. Aerts, D.; Broekaert, J.; Sozzo, S.; Veloz, T.: Meaning-focused and quantum-inspired information retrieval (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 735) [ClassicSimilarity], result of:
              0.098327816 = score(doc=735,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In recent years, quantum-based methods have promisingly integrated the traditional procedures in information retrieval (IR) and natural language processing (NLP). Inspired by our research on the identification and application of quantum structures in cognition, more specifically our work on the representation of concepts and their combinations, we put forward a 'quantum meaning based' framework for structured query retrieval in text corpora and standardized testing corpora. This scheme for IR rests on considering as basic notions, (i) 'entities of meaning', e.g., concepts and their combinations and (ii) traces of such entities of meaning, which is how documents are considered in this approach. The meaning content of these 'entities of meaning' is reconstructed by solving an 'inverse problem' in the quantum formalism, consisting of reconstructing the full states of the entities of meaning from their collapsed states identified as traces in relevant documents. The advantages with respect to traditional approaches, such as Latent Semantic Analysis (LSA), are discussed by means of concrete examples.
  11. Zapilko, B.: InFoLiS (2017) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 1031) [ClassicSimilarity], result of:
              0.098327816 = score(doc=1031,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 1031, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1031)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The DFG-funded InFoLiS series of projects was successfully completed this year. The projects were carried out by GESIS - Leibniz-Institut für Sozialwissenschaften, the Universitätsbibliothek Mannheim, and the Hochschule der Medien Stuttgart. The goal of the projects InFoLiS I and InFoLiS II was to develop methods for linking research data and literature. Such links can add considerable value to users' searches in the retrieval systems of information infrastructures such as libraries and research data centers. The project results in detail: - development of methods for automatically linking publications and research data - integration of these links into the project partners' retrieval systems - automatic subject indexing of research data - transfer of the developed methods into a reusable Linked Open Data-based infrastructure with web services and APIs - application of the methods to a cross-disciplinary and multilingual data basis - reusability of the links through the use of a research data ontology. Further information can be found on the project homepage [http://infolis.github.io/]. All project results, including source code, are available open source on our GitHub page [http://www.github.com/infolis/] for reuse. If you are interested in reuse or further development, contact us by e-mail (benjamin.zapilko@gesis.org).
  12. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 1142) [ClassicSimilarity], result of:
              0.098327816 = score(doc=1142,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 1142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1142)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprised of the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both as a way to introduce the methodology and also to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
  13. Shen, C.; Monge, P.; Williams, D.: ¬The evolution of social ties online : a longitudinal study in a massively multiplayer online game (2014) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 1498) [ClassicSimilarity], result of:
              0.098327816 = score(doc=1498,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 1498, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1498)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    How do social ties in online worlds evolve over time? This research examined the dynamic processes of relationship formation, maintenance, and demise in a massively multiplayer online game. Drawing from evolutionary and ecological theories of social networks, this study focuses on the impact of three sets of evolutionary factors in the context of social relationships in the online game EverQuest II (EQII): the aging and maturation processes, social architecture of the game, and homophily and proximity. A longitudinal analysis of tie persistence and decay demonstrated the transient nature of social relationships in EQII, but ties became considerably more durable over time. Also, character level similarity, shared guild membership, and geographic proximity were powerful mechanisms in preserving social relationships.
  14. Luca, H.: "Immer mehr Studierende und Schüler" : Konzepte zur Vermittlung von Informationskompetenz in Bibliotheken für große Gruppen (2012) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 2446) [ClassicSimilarity], result of:
              0.098327816 = score(doc=2446,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 2446, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2446)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Teaching information literacy is already an established core task in many libraries. The corresponding sessions primarily target university students and pupils of the upper secondary level (Sekundarstufe II). New challenges currently arise from the fact that, on the one hand, both groups have grown steadily in recent years and, on the other hand, information literacy has gained ever greater importance due to education policy developments such as the Bologna reform. As a consequence, the number of potential and actual participants in library information literacy sessions is rising, so that libraries today also have to deal with large groups. This article therefore addresses which specific problems arise when teaching information literacy to large groups, and which concepts are suited to meeting these challenges and offering target-group-oriented information literacy instruction with appropriate means for large groups. Various approaches already in use at university libraries are assessed with regard to their suitability for large groups. The author hopes thereby to provide affected libraries and librarians with some information and suggestions for their practical work.
  15. Abacha, A.B.; Zweigenbaum, P.: MEANS: A medical question-answering system combining NLP techniques and semantic Web technologies (2015) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 2677) [ClassicSimilarity], result of:
              0.098327816 = score(doc=2677,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 2677, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2677)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Question Answering (QA) task aims to provide precise and quick answers to user questions from a collection of documents or a database. This kind of IR system is sorely needed with the dramatic growth of digital information. In this paper, we address the problem of QA in the medical domain where several specific conditions are met. We propose a semantic approach to QA based on (i) Natural Language Processing techniques, which allow a deep analysis of medical questions and documents and (ii) semantic Web technologies at both representation and interrogation levels. We present our Semantic Question-Answering System, called MEANS and our proposed method for "Answer Search" based on semantic search and query relaxation. We evaluate the overall system performance on real questions and answers extracted from MEDLINE articles. Our experiments show promising results and suggest that a query-relaxation strategy can further improve the overall performance.
  16. Flores, F.N.; Moreira, V.P.: Assessing the impact of stemming accuracy on information retrieval : a multilingual perspective (2016) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 3187) [ClassicSimilarity], result of:
              0.098327816 = score(doc=3187,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 3187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3187)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The quality of stemming algorithms is typically measured in two different ways: (i) how accurately they map the variant forms of a word to the same stem; or (ii) how much improvement they bring to Information Retrieval systems. In this article, we evaluate various stemming algorithms, in four languages, in terms of accuracy and in terms of their aid to Information Retrieval. The aim is to assess whether the most accurate stemmers are also the ones that bring the biggest gain in Information Retrieval. Experiments in English, French, Portuguese, and Spanish show that this is not always the case, as stemmers with higher error rates yield better retrieval quality. As a byproduct, we also identified the most accurate stemmers and the best for Information Retrieval purposes.
  17. Capurro, R.; Eldred, M.; Nagel, D.: Digital whoness : identity, privacy and freedom in the cyberworld (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 3382) [ClassicSimilarity], result of:
              0.098327816 = score(doc=3382,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 3382, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3382)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The first aim is to provide well-articulated concepts by thinking through elementary phenomena of today's world, focusing on privacy and the digital, to clarify who we are in the cyberworld - hence a phenomenology of digital whoness. The second aim is to engage critically, hermeneutically with older and current literature on privacy, including in today's emerging cyberworld. Phenomenological results include concepts of i) self-identity through interplay with the world, ii) personal privacy in contradistinction to the privacy of private property, iii) the cyberworld as an artificial, digital dimension in order to discuss iv) what freedom in the cyberworld can mean, whilst not neglecting v) intercultural aspects and vi) the EU context.
  18. Oh, K.E.: Types of personal information categorization : rigid, fuzzy, and flexible (2017) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 3640) [ClassicSimilarity], result of:
              0.098327816 = score(doc=3640,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 3640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3640)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study aims to identify different styles of personal digital information categorization based on the mindscape of the categorizers. To collect data, a questionnaire, a diary study, and 2 semistructured interviews were conducted with each of 18 participants. Then a content analysis was used to analyze the data. Based on the analysis of the data, this study identified 3 different types of categorizers: (i) rigid categorizers, (ii) fuzzy categorizers, and (iii) flexible categorizers. This study provides a unique way to understand personal information categorization by showing how it reflects the mindscapes of the categorizers. In particular, this study explains why people organize their personal information differently and have different tendencies in developing and maintaining their organizational structures. The findings provide insights on different ways of categorizing personal information and deepen our knowledge of categorization, personal information management, and information behavior. In practice, understanding different types of personal digital information categorization can make contributions to the development of systems, tools, and applications that support effective personal digital information categorization.
  19. Schöne neue Welt? : Fragen und Antworten: Wie Facebook menschliche Gedanken auslesen will (2017) 0.02
    0.024351284 = product of:
      0.048702568 = sum of:
        0.048702568 = product of:
          0.097405136 = sum of:
            0.097405136 = weight(_text_:22 in 2810) [ClassicSimilarity], result of:
              0.097405136 = score(doc=2810,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.54716086 = fieldWeight in 2810, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2810)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2004 9:42:33
    22. 4.2017 11:58:05
  20. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.024351284 = product of:
      0.048702568 = sum of:
        0.048702568 = product of:
          0.097405136 = sum of:
            0.097405136 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.097405136 = score(doc=3582,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38

Languages

  • e 530
  • d 189
  • a 1
  • hu 1

Types

  • a 615
  • el 66
  • m 66
  • s 22
  • x 14
  • r 7
  • b 5
  • i 1
  • z 1
