Search (818 results, page 1 of 41)

  • Filter: year_i:[2020 TO 2030}
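The filter above uses Lucene range syntax, in which `[` makes a bound inclusive and `}` makes it exclusive, so `year_i:[2020 TO 2030}` matches publication years 2020 through 2029. A minimal sketch of that half-open interval check (the helper function is illustrative, not part of the search engine):

```python
def matches_year_filter(year: int, lower: int = 2020, upper: int = 2030) -> bool:
    """Lucene range [lower TO upper}: lower bound inclusive, upper bound exclusive."""
    return lower <= year < upper

# All entries below (2020-2023) pass the filter; a 2030 publication would not.
```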
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.17
    Score: 0.1712 = coord(4/8) × (0.0477 [3a] + 0.1430 [2f] + 0.0087 [information] + 0.1430 [2f])
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien / Library and Information Studies : Universität
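The relevance scores in this listing are Lucene ClassicSimilarity (tf-idf) explanations: each matching term contributes queryWeight × fieldWeight, i.e. idf² · √tf · queryNorm · fieldNorm, and the document score is the sum of term contributions scaled by a coordination factor coord(matching clauses / total clauses). A minimal sketch, assuming Lucene's documented ClassicSimilarity formulas and reusing the constants reported for entry 1 (float rounding aside):

```python
import math

def classic_term_score(freq: float, idf: float,
                       query_norm: float, field_norm: float) -> float:
    """ClassicSimilarity term weight: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)."""
    query_weight = idf * query_norm                 # e.g. 8.478011 * 0.036014426
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# idf = 1 + ln(maxDocs / (docFreq + 1)); for docFreq=24, maxDocs=44218 this gives ~8.478.
idf = 1.0 + math.log(44218 / (24 + 1))

# Constants behind the 0.1430 contribution of the "2f" term in entry 1:
score = classic_term_score(freq=2.0, idf=8.478011,
                           query_norm=0.036014426, field_norm=0.0390625)
# The document score is then coord * sum of such term scores,
# e.g. entry 1: coord(4/8) = 0.5, and 0.5 * 0.3423 gives the reported 0.1712.
```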
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.15
    Score: 0.1502 = coord(3/8) × (0.0572 [3a] + 0.1716 [2f] + 0.1716 [2f])
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Bischoff, M.: Der doppelte Einstein (2023) 0.06
    Score: 0.0605 = coord(2/8) × (0.2288 [mathematisches] + 0.0131 [29])
    
    Date
    25. 9.2023 18:29:25
    Series
    Mathematisches Mosaik
  4. Leppla, C.; Wolf, A.H.: Auf dem Weg zu einem integrativen Modell der Informationskompetenzvermittlung (IMIK) : das ACRL Framework for Information Literacy for Higher Education und der aktivitäts- und eigenschaftsorientierte Datenlebenszyklus (2021) 0.05
    Score: 0.0458 = coord(2/8) × (0.0104 [information] + 0.1727 [modell])
    
    Abstract
    This article examines the extent to which the ACRL Framework for Information Literacy for Higher Education can contribute to an integrative model for teaching information and data literacy. To this end, its core elements and principles are compared with the model of the activity- and property-oriented data life cycle, which describes the research process in research data management and data literacy. Based on the results of this comparison, the two concepts are synthesized into an integrative model of information literacy instruction (IMIK), which is concretized and complemented by the data life cycle. This makes it possible to derive, structure, and delineate training offerings for teaching information and data literacy for different target groups and levels, and to assign them to actors from the research disciplines and the infrastructure domain.
  5. Albrecht, I.: GPT-3: die Zukunft studentischer Hausarbeiten oder eine Bedrohung der wissenschaftlichen Integrität? (2023) 0.03
    Score: 0.0330 = coord(2/8) × (0.1221 [modell] + 0.0098 [29])
    
    Abstract
    With the advance of artificial intelligence, increasingly capable language-processing models have come to market. Upon its release, GPT-3 was the most powerful model of its time. The program generates texts that are often indistinguishable from human-written content. GPT-3's size and complexity enable it to write and output scientific articles autonomously that, according to studies, are sufficient to pass university courses. With the development of such artificial intelligences, especially on an open-source basis, students could in the future have term papers written by text generators. This thesis deals on the one hand with the GPT-3 model, its capabilities, and its risks. On the other hand, it addresses the question of how colleges and universities can deal with machine-written papers in the future.
    Date
    28. 1.2022 11:05:29
  6. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.03
    Score: 0.0320 = coord(2/8) × (0.1151 [modell] + 0.0130 [22])
    
    Abstract
    In the "Knowledge Representation" session at ISI 2021, moderated by Jürgen Reischer (University of Regensburg), three projects were presented in which knowledge representation is implemented with RDF. The domains are refreshingly diverse; the common thread, however, is the aim of improving access to research data: - Japanese Visual Media Graph - Taxonomy of Digital Research Activities in the Humanities - research data in the conceptual model of FRBR
    Date
    22. 5.2021 12:43:05
  7. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.03
    Score: 0.0298 = coord(3/8) × (0.0137 [information] + 0.0576 [retrieval] + 0.0081 [22])
    
    Abstract
    Purpose: Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
    Design/methodology/approach: In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and missteps.
    Findings: Various factors compromise shared files retrieval, including: collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
    Originality/value: Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.1, S.130-147
  8. Bischoff, M.: Hobby-Mathematiker findet die lang ersehnte Einstein-Kachel : Mathematisches Mosaik (2023) 0.03
    Score: 0.0286 = coord(1/8) × 0.2288 [mathematisches]
    
  9. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.03
    Score: 0.0262 = coord(3/8) × (0.0165 [information] + 0.0437 [retrieval] + 0.0098 [29])
    
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
    Date
    15. 9.2023 12:28:29
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
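The comparison in entry 9 contrasts truncated SVD of the raw document-term matrix (LSA) with correspondence analysis, which removes the marginal row/column frequency effects before decomposition. A minimal NumPy sketch of the two decompositions (the toy matrix and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def lsa(dtm: np.ndarray, k: int) -> np.ndarray:
    """LSA: truncated SVD of the raw document-term matrix; returns doc coordinates."""
    u, s, vt = np.linalg.svd(dtm, full_matrices=False)
    return u[:, :k] * s[:k]

def ca(dtm: np.ndarray, k: int) -> np.ndarray:
    """Correspondence analysis: SVD of standardized residuals, which subtracts
    the marginal effects that LSA's first dimensions tend to absorb."""
    p = dtm / dtm.sum()                      # correspondence matrix
    r, c = p.sum(axis=1), p.sum(axis=0)      # row and column masses
    residuals = (p - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    u, s, vt = np.linalg.svd(residuals, full_matrices=False)
    return (u[:, :k] * s[:k]) / np.sqrt(r)[:, None]  # principal row coordinates

dtm = np.array([[4.0, 1, 0], [2, 3, 1], [0, 1, 5]])  # 3 docs x 3 terms
docs_lsa, docs_ca = lsa(dtm, 2), ca(dtm, 2)
```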
  10. Was ist GPT-3 und spricht das Modell Deutsch? (2022) 0.02
    Score: 0.0249 = coord(1/8) × 0.1994 [modell]
    
    Abstract
    GPT-3 is a language-processing model from the American non-profit organization OpenAI. It uses deep learning to generate, summarize, simplify, or translate texts. GPT-3 has made repeated headlines since the publication of a research paper. Several newspapers and online publications tested its capabilities and published entire articles written by the AI model, among them The Guardian and Hacker News. Journalists around the globe have variously called it a "language talent", "artificial general intelligence", or "eloquent". Reason enough to take a closer look at the capabilities of this artificial language prodigy.
    Source
    https://www.lernen-wie-maschinen.ai/ki-pedia/was-ist-gpt-3-und-spricht-das-modell-deutsch/
  11. Huvila, I.: Making and taking information (2022) 0.02
    Score: 0.0229 = coord(3/8) × (0.0294 [information] + 0.0218 [retrieval] + 0.0098 [29])
    
    Abstract
    Information behavior theory covers different aspects of the totality of information-related human behavior rather unevenly. The transitions or trading zones between different types of information activities have remained perhaps especially under-theorized. This article interrogates and expands a conceptual apparatus of information making and information taking as a pair of substantial concepts for explaining, in part, the mobility of information in terms of doing that unfolds as a process of becoming rather than of being, and in part, what is happening when information comes into being and when something is taken up for use as information. Besides providing an apparatus to describe the nexus of information provision and acquisition, a closer consideration of the parallel doings opens opportunities to enrich the inquiry of the conditions and practice of information seeking, appropriation, discovery, and retrieval as modes of taking, and learning and information use as its posterities.
    Date
    10. 3.2022 14:10:29
    Series
    JASIS&T special issue on information behavior and information practices theory
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.4, S.528-541
    Theme
    Information
  12. Greifeneder, E.; Schlebbe, K.: Information Behaviour (2023) 0.02
    Score: 0.0226 = coord(2/8) × (0.0184 [information] + 0.0719 [modell])
    
    Abstract
    Information Behaviour (IB) refers to the various forms of interaction between people and information. The term is also used as an umbrella term for the research field that collects, analyzes, and interprets information behaviour. Third, the term denotes a research perspective within information behaviour research that focuses on the individual. As this article shows, information behaviour is a comparatively young research field that plays an important role in a digitized world. The article also shows, however, that many topics are still insufficiently researched or not researched at all, and that many terms are not clearly defined (Savolainen 2021). The focus here is therefore on presenting the diversity of definitions, theories, and models in information behaviour research. In the German-speaking world, today's IB research has existed as a field for only about 20 years, and thus some 30 years later than in information behaviour strongholds such as the USA, Sweden, Denmark, Finland, Norway, or Canada. Before that, user studies dominated in the German-speaking world, manifesting themselves in particular in studies on satisfaction with and use of libraries and museums. This article first defines the term information behaviour and presents a generalizing model of information behaviour research. It then describes forms of interaction with information and central developments in the research field. The article ends with a look at the information behaviour community.
  13. Hartel, J.: The red thread of information (2020) 0.02
    Score: 0.0202 = coord(3/8) × (0.0274 [information] + 0.0182 [retrieval] + 0.0081 [22])
    
    Abstract
    Purpose: In "The Invisible Substrate of Information Science", a landmark article about the discipline of information science, Marcia J. Bates wrote that "... we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
    Design/methodology/approach: Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
    Findings: Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely, domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
    Research limitations/implications: With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
    Originality/value: Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
    Theme
    Information
  14. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.02
    
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches toward name gender identification rely exclusively on using the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.461-475
  15. Strecker, D.: Dataset Retrieval : Informationsverhalten von Datensuchenden und das Ökosystem von Data-Retrieval-Systemen (2022) 0.02
    
    Abstract
    Various stakeholders are calling for better availability of research data. The success of these initiatives depends largely on how findable the published datasets are, which is why dataset retrieval is gaining importance. Dataset retrieval is a special form of information retrieval concerned with finding datasets. This contribution summarizes current research findings on the information behavior of people searching for data. Two search services of different orientation are then presented and compared as examples. To show how these services interlock, overlaps in their data holdings are used to analyze the exchange of metadata between them.
  16. Fuhr, N.: Modelle im Information Retrieval (2023) 0.02
    
    Abstract
    Information retrieval models (IR models) specify how, for a given query, the answer documents are determined from a document collection. The starting point of every model is a set of assumptions about the knowledge representation (see Part B, Methods and Systems of Subject Indexing) of queries and documents. Here we call the elements of these representations terms; from the model's point of view it does not matter how these terms are derived from the document (and, analogously, from the query entered by the user): for texts, computational-linguistic methods are frequently employed, but more complex automatic or manual indexing procedures can also be applied. Representations furthermore have a particular structure. A document is usually treated as a set or multiset of terms; in the latter case, multiple occurrences of a term are taken into account. This document representation is in turn mapped onto a so-called document description, in which the individual terms may be weighted. In what follows we distinguish only between unweighted indexing (the weight of a term is either 0 or 1) and weighted indexing (the weight is a non-negative real number). Analogously, there is a query representation; if a natural-language query is assumed, the procedures mentioned above for document texts can be applied. Alternatively, graphical or formal query languages are used, where from the models' perspective their logical structure (as in Boolean retrieval) is particularly relevant. The query representation is then transformed into a query description.
    Date
    24.11.2022 17:20:29
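    The distinction the abstract above draws between unweighted and weighted indexing can be illustrated with a minimal sketch (function names and the use of raw term frequency as the weight are illustrative choices, not taken from the chapter):

    ```python
    from collections import Counter

    def binary_description(terms):
        """Unweighted indexing: every term of the document gets weight 1, absent terms weight 0."""
        return {t: 1 for t in set(terms)}

    def tf_description(terms):
        """Weighted indexing: the weight is a non-negative real number; here, the raw term frequency,
        which requires treating the document as a multiset of terms."""
        return dict(Counter(terms))

    doc = ["retrieval", "model", "retrieval", "term"]
    print(binary_description(doc))  # multiple occurrences of "retrieval" collapse to weight 1
    print(tf_description(doc))      # multiple occurrences are counted
    ```

    The same two mappings can be applied to a natural-language query to obtain the query description the abstract mentions.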
  17. Mäder, A.: Gute Theorien, schlechte Theorien (2020) 0.02
    
    Abstract
    Natural scientists formulate and test hypotheses. However, it is disputed by which criteria this should be done, and when it is worthwhile to reject or accept a model.
  18. Hahn, S.: DarkBERT ist mit Daten aus dem Darknet trainiert : ChatGPTs dunkler Bruder? (2023) 0.02
    
    Abstract
    Researchers have developed an AI model trained on data from the darknet: DarkBERT's sources are hackers, cybercriminals, and the politically persecuted.
  19. Wu, Z.; Lu, C.; Zhao, Y.; Xie, J.; Zou, D.; Su, X.: ¬The protection of user preference privacy in personalized information retrieval : challenges and overviews (2021) 0.02
    
    Abstract
    This paper reviews a large body of research on user privacy protection in untrusted network environments and analyzes the limitations of these approaches when applied to personalized information retrieval, in order to establish the constraints that an effective approach to protecting user preference privacy in personalized information retrieval must meet, thus providing a baseline reference for solving this problem. First, based on the basic framework of a personalized information retrieval platform, we establish a complete set of constraints for user preference privacy protection in terms of security, usability, efficiency, and accuracy. Then, we comprehensively review the technical features of popular methods for user privacy protection and analyze their limitations in personalized information retrieval against these constraints. The results show that personalized information retrieval places higher demands on privacy protection: the security of users' preference privacy must be comprehensively improved on the untrusted server side without changing the platform, algorithms, efficiency, or accuracy of personalized information retrieval. Existing privacy methods, however, still cannot meet these requirements. This paper is an important attempt at the problem of user preference privacy protection in personalized information retrieval and can provide a baseline reference and direction for further study of the problem.
  20. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.02
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
