Search (235 results, page 1 of 12)

  • year_i:[2020 TO 2030}
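    The facet above uses Lucene range syntax: a square bracket marks an inclusive bound and a curly brace an exclusive one, so year_i:[2020 TO 2030} matches the years 2020 through 2029. A minimal sketch of that predicate in Python (illustrative only, not Lucene's implementation):

def in_year_range(year: int, lower: int = 2020, upper: int = 2030) -> bool:
    """True for 2020 <= year < 2030, i.e. the filter [2020 TO 2030}."""
    return lower <= year < upper

assert in_year_range(2020) and in_year_range(2029)
assert not in_year_range(2030)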
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.12
    0.120903075 = product of:
      0.24180615 = sum of:
        0.060451537 = product of:
          0.18135461 = sum of:
            0.18135461 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.18135461 = score(doc=862,freq=2.0), product of:
                0.32268468 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.038061365 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.18135461 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.18135461 = score(doc=862,freq=2.0), product of:
            0.32268468 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038061365 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Source
    https://arxiv.org/abs/2212.06721
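    The indented tree beneath each hit is Lucene ClassicSimilarity "explain" output: for every matching term, the score is the query weight (idf × queryNorm) times the field weight (tf × idf × fieldNorm), and coord(m/n) factors scale for how many query clauses matched. A short sketch recomputing the first hit's 0.120903075 from the figures printed above (plain arithmetic over the values shown in the explain tree):

import math

idf = 8.478011           # idf(docFreq=24, maxDocs=44218), as printed
query_norm = 0.038061365
field_norm = 0.046875    # fieldNorm(doc=862)
tf = math.sqrt(2.0)      # ClassicSimilarity: tf = sqrt(termFreq), freq = 2.0

query_weight = idf * query_norm           # 0.32268468
field_weight = tf * idf * field_norm      # 0.56201804
term_score = query_weight * field_weight  # 0.18135461 for "3a" and for "2f"

# The "3a" clause sits one level deeper and is scaled by coord(1/3);
# the sum of both clauses is then scaled by the outer coord(2/4).
score = (term_score * (1 / 3) + term_score) * (2 / 4)
print(score)  # ~0.1209, matching the 0.120903075 shown for hit 1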
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.10
    0.10075256 = product of:
      0.20150512 = sum of:
        0.05037628 = product of:
          0.15112884 = sum of:
            0.15112884 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.15112884 = score(doc=1000,freq=2.0), product of:
                0.32268468 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.038061365 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.15112884 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.15112884 = score(doc=1000,freq=2.0), product of:
            0.32268468 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038061365 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5 = coord(2/4)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Oesterlund, C.; Jarrahi, M.H.; Willis, M.; Boyd, K.; Wolf, C.T.: Artificial intelligence and the world of work : a co-constitutive relationship (2021) 0.03
    0.033665657 = product of:
      0.067331314 = sum of:
        0.010717705 = product of:
          0.032153115 = sum of:
            0.032153115 = weight(_text_:k in 5504) [ClassicSimilarity], result of:
              0.032153115 = score(doc=5504,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.23664509 = fieldWeight in 5504, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5504)
          0.33333334 = coord(1/3)
        0.056613613 = product of:
          0.113227226 = sum of:
            0.113227226 = weight(_text_:intelligent in 5504) [ClassicSimilarity], result of:
              0.113227226 = score(doc=5504,freq=4.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.5281033 = fieldWeight in 5504, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5504)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The use of intelligent machines - digital technologies that feature data-driven forms of customization, learning, and autonomous action - is rapidly growing and will continue to impact many industries and domains. This is consequential for communities of researchers, educators, and practitioners concerned with studying, supporting, and educating information professionals. In the face of new developments in artificial intelligence (AI), the research community faces 3 questions: (a) How is AI becoming part of the world of work? (b) How is the world of work becoming part of AI? and (c) How can the information community help address this topic of Work in the Age of Intelligent Machines (WAIM)? This opinion piece considers these 3 questions by drawing on discussion from an engaging 2019 iConference workshop organized by the NSF-supported WAIM research coordination network (note: https://waim.network).
  4. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.029653851 = product of:
      0.059307702 = sum of:
        0.05037628 = product of:
          0.15112884 = sum of:
            0.15112884 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.15112884 = score(doc=5669,freq=2.0), product of:
                0.32268468 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.038061365 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.008931421 = product of:
          0.026794262 = sum of:
            0.026794262 = weight(_text_:k in 5669) [ClassicSimilarity], result of:
              0.026794262 = score(doc=5669,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19720423 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  5. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.03
    0.026687913 = product of:
      0.10675165 = sum of:
        0.10675165 = product of:
          0.2135033 = sum of:
            0.2135033 = weight(_text_:intelligent in 249) [ClassicSimilarity], result of:
              0.2135033 = score(doc=249,freq=8.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.99580115 = fieldWeight in 249, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes an approach to path-finding in intelligent graphs whose vertices are intelligent agents. A possible implementation of this approach is described, based on logical inference in a distributed frame hierarchy. The presented approach can be used for implementing distributed intelligent information systems that include automatic navigation and path generation in hypertext, which can be used, for example, in distance education, as well as for organizing intelligent web catalogues with flexible ontology-based information retrieval.
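    Stripped of the frame hierarchy and the logical inference, the operation described above is path search over a graph whose vertices decide for themselves which edges they expose. A generic Python sketch (the names and the per-agent callables are invented for illustration; this is not the ROMEO architecture itself):

from collections import deque

def find_path(agents, start, goal):
    """Breadth-first path search in which each vertex "agent" supplies
    its own outgoing links via a callable, standing in for the paper's
    logical inference over a distributed frame hierarchy."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in agents[node]():  # the agent decides its links
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Toy usage: three "agents" with static link decisions.
agents = {"a": lambda: ["b"], "b": lambda: ["c"], "c": lambda: []}
print(find_path(agents, "a", "c"))  # ['a', 'b', 'c']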
  6. Vogt, T.: ¬Die Transformation des renommierten Informationsservices zbMATH zu einer Open Access-Plattform für die Mathematik steht vor dem Abschluss. (2020) 0.03
    0.026655968 = product of:
      0.10662387 = sum of:
        0.10662387 = product of:
          0.3198716 = sum of:
            0.3198716 = weight(_text_:c3 in 31) [ClassicSimilarity], result of:
              0.3198716 = score(doc=31,freq=2.0), product of:
                0.37113553 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.038061365 = queryNorm
                0.8618728 = fieldWeight in 31, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=31)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    "Mit Beginn des Jahres 2021 wird der umfassende internationale Informationsservice zbMATH in eine Open Access-Plattform überführt. Dann steht dieser bislang kostenpflichtige Dienst weltweit allen Interessierten kostenfrei zur Verfügung. Die Änderung des Geschäftsmodells ermöglicht, die meisten Informationen und Daten von zbMATH für Forschungszwecke und zur Verknüpfung mit anderen nicht-kommerziellen Diensten frei zu nutzen, siehe: https://www.mathematik.de/dmv-blog/2772-transformation-von-zbmath-zu-einer-open-access-plattform-f%C3%BCr-die-mathematik-kurz-vor-dem-abschluss."
  7. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.01
    0.013094037 = product of:
      0.026188074 = sum of:
        0.010717705 = product of:
          0.032153115 = sum of:
            0.032153115 = weight(_text_:k in 299) [ClassicSimilarity], result of:
              0.032153115 = score(doc=299,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.23664509 = fieldWeight in 299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=299)
          0.33333334 = coord(1/3)
        0.015470369 = product of:
          0.030940738 = sum of:
            0.030940738 = weight(_text_:22 in 299) [ClassicSimilarity], result of:
              0.030940738 = score(doc=299,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.23214069 = fieldWeight in 299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=299)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  8. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.011675962 = product of:
      0.04670385 = sum of:
        0.04670385 = product of:
          0.0934077 = sum of:
            0.0934077 = weight(_text_:intelligent in 572) [ClassicSimilarity], result of:
              0.0934077 = score(doc=572,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.435663 = fieldWeight in 572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=572)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it in accordance with the specifics of a particular user task. Such reduction aims at simplifying knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology; a task thesaurus contains the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
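    As a rough illustration of the reduction step, the sketch below keeps only those ontology concepts whose similarity to at least one task term clears a threshold. The similarity measure and threshold are placeholders, and the paper's combinatorial optimization would minimize the result further:

def task_thesaurus(ontology_concepts, task_terms, similarity, threshold=0.2):
    """Reduce a domain ontology to a task thesaurus: keep a concept only
    if its best semantic-similarity estimate against the task terms
    reaches the threshold."""
    return {
        concept
        for concept in ontology_concepts
        if max(similarity(concept, term) for term in task_terms) >= threshold
    }

# Toy similarity: token overlap (a real system would use an
# ontology-based semantic similarity estimate).
def overlap(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

print(task_thesaurus({"information retrieval", "semantic retrieval", "baking"},
                     {"retrieval of information"}, overlap))
# {'information retrieval', 'semantic retrieval'}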
  9. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.01
    0.0109116975 = product of:
      0.021823395 = sum of:
        0.008931421 = product of:
          0.026794262 = sum of:
            0.026794262 = weight(_text_:k in 5844) [ClassicSimilarity], result of:
              0.026794262 = score(doc=5844,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19720423 = fieldWeight in 5844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5844)
          0.33333334 = coord(1/3)
        0.012891974 = product of:
          0.025783949 = sum of:
            0.025783949 = weight(_text_:22 in 5844) [ClassicSimilarity], result of:
              0.025783949 = score(doc=5844,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19345059 = fieldWeight in 5844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5844)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden. Design/methodology/approach: This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol. Findings: Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study. Originality/value: There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  10. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.01
    0.0109116975 = product of:
      0.021823395 = sum of:
        0.008931421 = product of:
          0.026794262 = sum of:
            0.026794262 = weight(_text_:k in 883) [ClassicSimilarity], result of:
              0.026794262 = score(doc=883,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19720423 = fieldWeight in 883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=883)
          0.33333334 = coord(1/3)
        0.012891974 = product of:
          0.025783949 = sum of:
            0.025783949 = weight(_text_:22 in 883) [ClassicSimilarity], result of:
              0.025783949 = score(doc=883,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19345059 = fieldWeight in 883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=883)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging two authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND): LAGOS-AND-BLOCK, created for clustering-based AND research, and LAGOS-AND-PAIRWISE, created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of variation of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host with the authors' official last names shown on the ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
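    The labeling idea behind LAGOS-AND - ORCID links a person to their DOIs, so author entries sharing an ORCID iD yield same-author pairs - can be sketched as follows (toy records and DOIs; not the authors' actual pipeline):

from itertools import combinations

def pairwise_labels(records):
    """Build labeled name pairs from (orcid, doi, name) records:
    label 1 if the two entries share an ORCID iD (same author),
    0 otherwise; entries from the same paper are skipped."""
    pairs = []
    for (o1, d1, n1), (o2, d2, n2) in combinations(records, 2):
        if d1 == d2:
            continue
        pairs.append(((n1, d1), (n2, d2), int(o1 == o2)))
    return pairs

records = [  # made-up toy data
    ("0000-0001-0000-0001", "10.1000/a", "L. Zhang"),
    ("0000-0001-0000-0001", "10.1000/b", "Lei Zhang"),
    ("0000-0002-9999-0002", "10.1000/c", "L. Zhang"),
]
for pair in pairwise_labels(records):
    print(pair)  # 1 = same author, 0 = different authors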
  11. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.01
    0.0109116975 = product of:
      0.021823395 = sum of:
        0.008931421 = product of:
          0.026794262 = sum of:
            0.026794262 = weight(_text_:k in 995) [ClassicSimilarity], result of:
              0.026794262 = score(doc=995,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19720423 = fieldWeight in 995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=995)
          0.33333334 = coord(1/3)
        0.012891974 = product of:
          0.025783949 = sum of:
            0.025783949 = weight(_text_:22 in 995) [ClassicSimilarity], result of:
              0.025783949 = score(doc=995,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19345059 = fieldWeight in 995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=995)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 6.2023 18:11:50
  12. Vakkari, P.; Järvelin, K.; Chang, Y.-W.: ¬The association of disciplinary background with the evolution of topics and methods in Library and Information Science research 1995-2015 (2023) 0.01
    0.0109116975 = product of:
      0.021823395 = sum of:
        0.008931421 = product of:
          0.026794262 = sum of:
            0.026794262 = weight(_text_:k in 998) [ClassicSimilarity], result of:
              0.026794262 = score(doc=998,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19720423 = fieldWeight in 998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=998)
          0.33333334 = coord(1/3)
        0.012891974 = product of:
          0.025783949 = sum of:
            0.025783949 = weight(_text_:22 in 998) [ClassicSimilarity], result of:
              0.025783949 = score(doc=998,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.19345059 = fieldWeight in 998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=998)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 6.2023 18:15:06
  13. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.01
    0.010007967 = product of:
      0.04003187 = sum of:
        0.04003187 = product of:
          0.08006374 = sum of:
            0.08006374 = weight(_text_:intelligent in 1045) [ClassicSimilarity], result of:
              0.08006374 = score(doc=1045,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.37342542 = fieldWeight in 1045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1045)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
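    The two techniques compared in this entry differ mainly in which matrix is decomposed: LSA takes the SVD of the raw term-document matrix, while correspondence analysis takes the SVD of the chi-square standardized residuals. A sketch of both in the standard textbook formulation (not the authors' code):

import numpy as np

def lsa_embedding(X, k):
    """LSA: truncated SVD of the raw term-document matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T * s[:k]  # document coordinates

def ca_embedding(X, k):
    """Correspondence analysis: SVD of the standardized residuals,
    i.e. chi-square-normalized deviations from independence."""
    P = X / X.sum()
    r = P.sum(axis=1, keepdims=True)  # row (term) masses
    c = P.sum(axis=0, keepdims=True)  # column (document) masses
    S = (P - r @ c) / np.sqrt(r @ c)  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return (Vt[:k] / np.sqrt(c)).T * s[:k]  # document principal coordinates

X = np.array([[3., 0., 1.],  # toy term-document counts
              [2., 1., 0.],
              [0., 4., 2.]])
print(lsa_embedding(X, 2))
print(ca_embedding(X, 2))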
  14. ¬Der Student aus dem Computer (2023) 0.01
    0.009024382 = product of:
      0.036097527 = sum of:
        0.036097527 = product of:
          0.07219505 = sum of:
            0.07219505 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.07219505 = score(doc=1079,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 1.2023 16:22:55
  15. Giesselbach, S.; Estler-Ziegler, T.: Dokumente schneller analysieren mit Künstlicher Intelligenz (2021) 0.01
    0.008339973 = product of:
      0.033359893 = sum of:
        0.033359893 = product of:
          0.066719785 = sum of:
            0.066719785 = weight(_text_:intelligent in 128) [ClassicSimilarity], result of:
              0.066719785 = score(doc=128,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.31118786 = fieldWeight in 128, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=128)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Artificial intelligence (AI) and natural language understanding (NLU) are changing many aspects of our everyday life and the way we work. NLU gained particular prominence through voice assistants such as Siri, Alexa, and Google Now. NLU offers companies and institutions the potential to make processes more efficient and to derive added value from textual content. NLU solutions are able to index complex, unstructured documents by content. For semantic text analysis, the NLU team at IAIS has developed language models that are trained with deep learning methods. The NLU suite analyzes documents, extracts key data, and, if required, even creates a structured summary. With these results, but also via the content of the documents themselves, documents can be compared, and texts with similar information can be found. AI-based language models are clearly superior to classical keyword indexing: they find not only texts with predefined keywords but also search intelligently for terms that appear in similar contexts or are used as synonyms. The talk offers a classification of the terms "artificial intelligence" and "natural language understanding" and outlines possibilities, limits, current research directions, and methods. Using practical examples, it then demonstrates how NLU can be used for automated receipt processing, for cataloguing large data holdings such as news and patents, and for automated thematic grouping of social media posts and publications.
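    The contrast drawn above - predefined keywords versus terms used in similar contexts - comes down to comparing vector representations of texts. A minimal sketch with made-up 3-dimensional vectors (a real NLU suite would use high-dimensional embeddings from a trained language model):

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

embed = {  # toy "embeddings"; the values are invented for illustration
    "invoice": np.array([0.9, 0.1, 0.0]),
    "receipt": np.array([0.8, 0.2, 0.1]),
    "poem":    np.array([0.0, 0.1, 0.9]),
}
query = embed["invoice"]
ranked = sorted(embed, key=lambda w: cosine(query, embed[w]), reverse=True)
print(ranked)  # ['invoice', 'receipt', 'poem']: context similarity, not keywords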
  16. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.01
    0.008256152 = product of:
      0.03302461 = sum of:
        0.03302461 = product of:
          0.06604922 = sum of:
            0.06604922 = weight(_text_:intelligent in 923) [ClassicSimilarity], result of:
              0.06604922 = score(doc=923,freq=4.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.30806026 = fieldWeight in 923, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=923)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    "I still don't know what to think about GPT-4, the new large language model (LLM) from OpenAI. On the one hand it is a remarkable product that easily passes the Turing test. If you ask it questions, via the ChatGPT interface, GPT-4 can easily produce fluid sentences largely indistinguishable from those a person might write. But on the other hand, amid the exceptional levels of hype and anticipation, it's hard to know where GPT-4 and other LLMs truly fit in the larger project of making machines intelligent.
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, earlier versions of ChatGPT, for example, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
  17. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.01
    0.0077351844 = product of:
      0.030940738 = sum of:
        0.030940738 = product of:
          0.061881475 = sum of:
            0.061881475 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.061881475 = score(doc=4156,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    2. 3.2020 14:08:22
  18. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.01
    0.0077351844 = product of:
      0.030940738 = sum of:
        0.030940738 = product of:
          0.061881475 = sum of:
            0.061881475 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.061881475 = score(doc=1203,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  19. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.01
    0.006671978 = product of:
      0.026687913 = sum of:
        0.026687913 = product of:
          0.053375825 = sum of:
            0.053375825 = weight(_text_:intelligent in 79) [ClassicSimilarity], result of:
              0.053375825 = score(doc=79,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.24895029 = fieldWeight in 79, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and is hence more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights concerning semantic web technologies, and in addition it elucidates the issues as well as the solutions regarding the Semantic Web. The chapter highlights the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars, i.e., RDF, ontology, and XML. Moreover, it describes the different semantic web tools used in the framework and technology, and it attempts to illustrate different approaches of semantic web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
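    The RDF pillar named above rests on the triple model: every statement is a subject-predicate-object triple. A minimal sketch using the rdflib Python library (assuming it is installed; the namespace and terms are invented for illustration):

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # made-up namespace
g = Graph()
g.add((EX.SemanticWeb, RDF.type, EX.Concept))
g.add((EX.SemanticWeb, RDFS.label, Literal("Semantic Web")))
g.add((EX.SemanticWeb, EX.extends, URIRef("http://example.org/WWW")))
print(g.serialize(format="turtle"))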
  20. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.01
    0.006671978 = product of:
      0.026687913 = sum of:
        0.026687913 = product of:
          0.053375825 = sum of:
            0.053375825 = weight(_text_:intelligent in 752) [ClassicSimilarity], result of:
              0.053375825 = score(doc=752,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.24895029 = fieldWeight in 752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.03125 = fieldNorm(doc=752)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Digitalization has hollowed out our privacy, fragmented the public sphere into antagonistic partial publics, lowered inhibition thresholds, and softened the boundary between truth and lie. Wolfgang Huber describes this technical and social development clearly and pointedly. He shows how consensus-capable ethical principles for dealing with digital intelligence can be found and implemented - by legislators, by digital providers, and by all users. Attitudes toward digitalization oscillate between euphoria and apocalypse: some expect the creation of a new human being who makes himself God; others fear the loss of freedom and human dignity. Wolfgang Huber, by contrast, takes a realistic look at the technological upheaval. It begins with language: are the "social media" really social? Does a car equipped with digital intelligence drive "autonomously", or rather merely automated? Are algorithms that learn through pattern recognition therefore "intelligent"? Exuberant language all too often makes us forget that even the most powerful computers are only machines, developed and operated by humans; if need be, they can be unplugged. This wonderfully vivid book, written at the level of the current ethical debates, makes us aware that we must not surrender to digitalization but can shape it in a self-determined and responsible way. 80th birthday of Wolfgang Huber on 12.8.2022. An antidote to overly euphoric and apocalyptic expectations of digitalization: how we can change our attitude toward digitalization so as not to surrender to the technology.

Languages

  • e 162
  • d 72

Types

  • a 213
  • el 52
  • m 7
  • p 2
  • A 1
  • EL 1
  • x 1