Search (117 results, page 1 of 6)

  • Filter: year_i:[2020 TO 2030}
  1. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.07
    0.07426543 = product of:
      0.14853086 = sum of:
        0.14853086 = sum of:
          0.107353695 = weight(_text_:class in 915) [ClassicSimilarity], result of:
            0.107353695 = score(doc=915,freq=2.0), product of:
              0.28640816 = queryWeight, product of:
                5.6542544 = idf(docFreq=420, maxDocs=44218)
                0.05065357 = queryNorm
              0.37482765 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.6542544 = idf(docFreq=420, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
          0.041177157 = weight(_text_:22 in 915) [ClassicSimilarity], result of:
            0.041177157 = score(doc=915,freq=2.0), product of:
              0.17738017 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05065357 = queryNorm
              0.23214069 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
      0.5 = coord(1/2)
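The breakdown above is standard Lucene ClassicSimilarity explain output. As a sanity check, a minimal Python sketch of the same arithmetic (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)); queryNorm and the lossily encoded fieldNorm are taken as given from the output rather than derived):

```python
import math

# Values copied from the explain output above.
MAX_DOCS   = 44218
QUERY_NORM = 0.05065357
FIELD_NORM = 0.046875

def clause_score(doc_freq: int, freq: float) -> float:
    """One term clause: queryWeight * fieldWeight, as ClassicSimilarity computes it."""
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))     # e.g. ~5.6542544 for docFreq=420
    query_weight = idf * QUERY_NORM                     # ~0.28640816
    field_weight = math.sqrt(freq) * idf * FIELD_NORM   # tf(freq) = sqrt(freq)
    return query_weight * field_weight

score_class = clause_score(doc_freq=420, freq=2.0)    # ~0.1073537 for _text_:class
score_22    = clause_score(doc_freq=3622, freq=2.0)   # ~0.0411772 for _text_:22
total = 0.5 * (score_class + score_22)                # coord(1/2) -> ~0.0742654
```

The reproduced values match the explain tree to within float32 rounding.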
    
    Abstract
    Experimentation is the process of trying something out to discover what happens. It is a widespread information practice, yet one often bypassed in information-behavior research. This article argues that experimentation complements prior knowledge, documents, and people as an important fourth class of information sources. Relative to the other classes, the distinguishing characteristics of experimentation are that it is a personal (as opposed to interpersonal) source and that it provides "backtalk." When the information seeker tries something out and then attends to the resulting situation, it is as though the materials of the situation talk back: they provide the information seeker with a situated and direct experience of the consequences of the tried-out options. In this way, experimentation involves obtaining information by creating it. It also involves turning material and behavioral processes into information interactions. Information seeking by experimentation is thereby important to practical information literacy, and it extends information-behavior research with new insights into the interrelations between creating and seeking information.
    Date
    21. 3.2023 19:22:29
  2. Gnoli, C.: Faceted classifications as linked data : a logical analysis (2021) 0.06
    
    Abstract
    Faceted knowledge organization systems have sophisticated logical structures, making their representation as linked data a demanding task. The term facet is often used in ambiguous ways: while in thesauri facets only work as semantic categories, in classification schemes they also have syntactic functions. The need to convert the Integrative Levels Classification (ILC) into SKOS stimulated a more general analysis of the different kinds of syntactic facets, as can be represented in terms of RDF properties and their respective domain and range. A nomenclature is proposed, distinguishing between common facets, which can be appended to any class, that is, have an unrestricted domain; and special facets, which are exclusive to some class, that is, have a restricted domain. In both cases, foci can be taken from any other class (unrestricted range: free facets), or only from subclasses of an existing class (parallel facets), or be defined specifically for the present class (bound facets). Examples are given of such cases in ILC and in the Dewey Decimal Classification (DDC).
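Read as data, the proposed nomenclature classifies a facet along two axes: its domain (common vs. special) and where its foci come from (free, parallel, or bound). A hypothetical Python encoding, just to make the two axes concrete (function name, labels, and the example class are mine, not from the article):

```python
def classify_facet(domain_class, focus_source):
    """Classify a facet along the article's two axes (hypothetical encoding).

    domain_class: None if the facet may be appended to any class (unrestricted
                  domain -> common facet), otherwise the class it is exclusive
                  to (restricted domain -> special facet).
    focus_source: where foci may be taken from -- "any" class (free facet),
                  "subclasses" of an existing class (parallel facet), or
                  "defined-here" for the present class only (bound facet).
    """
    scope = "common" if domain_class is None else "special"
    kind = {"any": "free", "subclasses": "parallel", "defined-here": "bound"}[focus_source]
    return scope, kind

# A facet appendable to any class, with foci from any class:
assert classify_facet(None, "any") == ("common", "free")
# A facet exclusive to one class, with foci defined only for that class:
assert classify_facet("O Literature", "defined-here") == ("special", "bound")
```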
  3. Boczkowski, P.; Mitchelstein, E.: The digital environment : How we live, learn, work, and play now (2021) 0.05
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this, including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating, noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Date
    22. 6.2023 18:25:18
  4. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.04
    
    Abstract
    The Wikidata gadget, CCLitBox, for the automated classification of literary authors and works by a faceted classification and using Linked Open Data (LOD) is presented. The tool reproduces the classification algorithm of class O Literature of the Colon Classification and uses data freely available in Wikidata to create Colon Classification class numbers. CCLitBox is totally free and enables any user to classify literary authors and their works; it is easily accessible to everybody; it uses LOD from Wikidata but missing data for classification can be freely added if necessary; it is readymade for any cooperative and networked project.
  5. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.04
    
    Source
    https://arxiv.org/abs/2212.06721
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia is second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/>). 250120 via digithek ch = #fineBlog; see also: In light of last week's publication of the six-millionth article in the English-language Wikipedia, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive, undeclared paid editing are clearly not working. *"Since the volunteer editors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF appears unable to counter it in any way, the only viable path for the editors is, for now, to prohibit the creation of new articles about companies,"* writes user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Pankowski, T.: Ontological databases with faceted queries (2022) 0.03
    
    Abstract
    The success of ontology-based systems depends on efficient and user-friendly methods of formulating queries against the ontology. We propose a method to query a class of ontologies, called facet ontologies (fac-ontologies), using a faceted, human-oriented approach. A fac-ontology has two important features: (a) a hierarchical view of it can be defined as a nested facet over this ontology, and the view can be used as a faceted interface to create queries and to explore the ontology; (b) the ontology can be converted into an ontological database, the ABox of which is stored in a database, and the faceted queries are evaluated against this database. We show that the proposed faceted interface makes it possible to formulate queries that are semantically equivalent to $${\mathcal{SROIQ}}^{Fac}$$, a limited version of the $${\mathcal{SROIQ}}$$ description logic. The TBox of a fac-ontology is divided into a set of rules defining intensional predicates and a set of constraint rules to be satisfied by the database. We identify a class of so-called reflexive weak cycles in a set of constraint rules and propose a method to deal with them in the chase procedure. The considerations are illustrated with solutions implemented in the DAFO system (data access based on faceted queries over ontologies).
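As an illustration of the general idea (not of the DAFO implementation itself), a faceted query over an ABox stored as triples can be evaluated by intersecting the subjects matching each facet selection; the triples and property names below are invented:

```python
# Toy ABox as a set of (subject, property, value) triples; all names invented.
abox = {
    ("paper1", "hasTopic", "ontologies"),
    ("paper1", "hasYear",  "2022"),
    ("paper2", "hasTopic", "databases"),
    ("paper2", "hasYear",  "2022"),
}

def faceted_query(abox, selections):
    """Return the subjects satisfying every (property, value) facet selection."""
    subjects = {s for s, _, _ in abox}
    for prop, value in selections.items():
        subjects &= {s for s, p, v in abox if p == prop and v == value}
    return subjects
```

Narrowing by one facet keeps both papers; adding a second facet narrows the result further, which is exactly the drill-down behavior a faceted interface exposes.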
  9. Skulimowski, A.M.J.; Köhler, T.: A future-oriented approach to the selection of artificial intelligence technologies for knowledge platforms (2023) 0.03
    
    Abstract
    This article presents the approaches used to solve the problem of selecting AI technologies and tools to obtain the creativity-fostering functionalities of an innovative knowledge platform. This selection problem has so far lagged behind other software-specific aspects of online knowledge platform and learning platform development. We linked technological recommendations from group decision support exercises to the platform design aims and constraints using an expert Delphi survey and multicriteria analysis methods. The links between the expected advantages of using selected AI building tools, AI-related system functionalities, and their ongoing relevance until 2030 were assessed and used to optimize the learning scenarios and to plan the future development of the platform. The selected technologies allowed the platform management to implement the desired functionalities, thus harnessing the potential of open innovation platforms more effectively and delivering a model for the development of a relevant class of advanced open-access knowledge provision systems. Additionally, our approach is an essential part of a digital sustainability and AI-alignment strategy for this class of systems. The knowledge platform that serves as a case study for our methodology has been developed within an EU Horizon 2020 research project.
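The selection step combines expert ratings with multicriteria analysis. A minimal weighted-sum sketch of such an aggregation, with criteria, weights, and technology names invented purely for illustration (the article's actual criteria and method details are not reproduced here):

```python
# Invented criterion weights (summing to 1) and expert ratings in [0, 1].
weights = {"maturity": 0.4, "fit_to_platform_aims": 0.35, "cost": 0.25}
ratings = {
    "knowledge_graph_backend": {"maturity": 0.8, "fit_to_platform_aims": 0.9, "cost": 0.6},
    "recommender_module":      {"maturity": 0.6, "fit_to_platform_aims": 0.7, "cost": 0.4},
}

def weighted_score(tech: str) -> float:
    """Weighted-sum aggregation of one technology's criterion ratings."""
    return sum(weights[c] * ratings[tech][c] for c in weights)

best = max(ratings, key=weighted_score)  # the highest aggregate score wins
```

Real multicriteria analyses (e.g. outranking methods) are more elaborate, but the weighted sum conveys how Delphi-derived ratings feed a ranking.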
  10. Day, R.E.: Occupational classes, information technologies and the wage (2020) 0.03
    
    Abstract
    Occupational classifications mix epistemic and social notions of class in interesting ways that show not only the descriptive but also the prescriptive uses of documentality. In this paper, I would like to discuss how occupational classes have shifted from being a priori to being a posteriori documentary devices for both describing and prescribing labor. Post-coordinate indexing and algorithmic documentary systems must be viewed within post-Fordist constructions of identity and capitalism's construction of social sense by the wage if we are to have a better understanding of digital labor. In post-Fordist environments, documentation and its information technologies are not simply descriptive tools but are at the center of struggles of capital's prescription and direction of labor. Just like earlier documentary devices but even more prescriptively and socially internalized, information technology is not just a tool for users but rather is a device in the construction of such users and what they use (and are used by) at the level of their very being.
  11. Higgins, C.: 'I coulda had class' : the difficulties of classifying film in Library of Congress Classification and Dewey Decimal Classification (2022) 0.03
    
  12. Der Student aus dem Computer (2023) 0.02
    
    Date
    27. 1.2023 16:22:55
  13. Lindau, S.T.; Makelarski, J.A.; Abramsohn, E.M.; Beiser, D.G.; Boyd, K.; Huang, E.S.; Paradise, K.; Tung, E.L.: Sharing information about health-related resources : observations from a community resource referral intervention trial in a predominantly African American/Black community (2022) 0.02
    
    Abstract
    CommunityRx is a theory-driven, information technology-based intervention, developed with and in a predominantly African American/Black community, that provides patients with personalized information (a "HealtheRx") about self-management and social care resources in their community. We described patient and clinician information sharing after exposure to the intervention during a clinical trial. Survey data from 145 patients (ages 45-74) and 121 clinicians were analyzed. Of the patients who shared information at least once (49%), 47% reported sharing ≥3 times (range 1-14). Patient sharers were in poorer physical health (mean PCS 37.6 vs. 40.8, p = .05) than nonsharers and were more likely to report going to a resource on their HealtheRx (79 vs. 41%, p = .05). Most patient sharers gave others a look at or a copy of their HealtheRx, keeping the original. Patients used the HealtheRx to promote the credibility of the information and to communicate that resources were disease-specific and local. Half of the clinicians shared HealtheRx resource information with peers; sharers were three times more likely than nonsharers to feel well-informed about resources to address social needs (55 vs. 18%, p < .01). Information sharing by clinicians and patients is an understudied mechanism that could amplify the effects of a growing class of community resource referral information technologies.
  14. Thomer, A.K.: Integrative data reuse at scientifically significant sites : case studies at Yellowstone National Park and the La Brea Tar Pits (2022) 0.02
    
    Abstract
    Scientifically significant sites are the source of, and long-term repository for, considerable amounts of data, particularly in the natural sciences. However, the unique data practices of the researchers and resource managers at these sites have been relatively understudied. Through case studies of two scientifically significant sites (the hot springs at Yellowstone National Park and the fossil deposits at the La Brea Tar Pits), I developed rich descriptions of site-based research and data curation, and high-level data models of the information classes needed to support integrative data reuse. Each framework treats the geospatial site and its changing natural characteristics as a distinct class of information; more commonly considered information classes, such as observational and sampling data and project metadata, are defined in relation to the site itself. This work contributes (a) case studies of the values and data needs of researchers and resource managers at scientifically significant sites, (b) an information framework to support integrative reuse at these sites, and (c) a discussion of data practices at scientifically significant sites.
  15. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.02
    
    Abstract
    Text classification is a common task in data science. Despite the superior performance of deep learning-based models in various text classification tasks, their black-box nature poses significant challenges for wide adoption. The knowledge-to-action framework emphasizes several principles concerning the application and use of knowledge, such as ease of use, customization, and feedback. Guided by these principles and by the properties of interpretable machine learning, we identify the design requirements for, and propose, an interpretable deep learning (IDeL) based framework for text classification models. IDeL comprises three main components: feature penetration, instance aggregation, and feature perturbation. We evaluate our implementation of the framework with two distinct case studies: fake news detection and social question categorization. The experimental results provide evidence for the efficacy of the IDeL components in enhancing the interpretability of text classification models. Moreover, the findings are generalizable across binary, multi-label, and multi-class classification problems. The proposed IDeL framework introduces a unique iField perspective for building trusted models in data science by improving the transparency of, and access to, advanced black-box models.
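Of the three components, feature perturbation is the easiest to sketch: a token's importance is the drop in the model's score when that token is removed (leave-one-out). The scoring function below is a toy stand-in for a real classifier, and all names and cue words are invented:

```python
def perturbation_importance(score_fn, tokens):
    """Leave-one-out importance: how much the score drops without each token."""
    base = score_fn(tokens)
    return {
        t: base - score_fn([x for x in tokens if x != t])
        for t in dict.fromkeys(tokens)  # preserve order, drop duplicates
    }

# Toy "fake-news score": the fraction of tokens that are sensational cue words.
CUES = {"shocking", "miracle"}
def toy_score(tokens):
    return sum(t in CUES for t in tokens) / max(len(tokens), 1)

imp = perturbation_importance(toy_score, ["shocking", "cure", "found"])
```

Note that removing a neutral token can raise a ratio-based score, yielding a negative importance; real perturbation schemes mask rather than delete tokens to avoid this kind of length effect.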
  16. Safder, I.; Ali, M.; Aljohani, N.R.; Nawaz, R.; Hassan, S.-U.: Neural machine translation for in-text citation classification (2023) 0.02
    
    Abstract
    The quality of scientific publications can be measured by quantitative indices such as the h-index, Source Normalized Impact per Paper, or g-index. However, these measures fail to explain the function of or reasons for citations, or the context of citations from citing publication to cited publication. We argue that citation context should be considered when calculating the impact of research work. However, mining citation context from unstructured full-text publications is a challenging task. In this paper, we compiled a data set comprising 9,518 citation contexts. We developed a deep learning-based architecture for citation context classification. Unlike feature-based state-of-the-art models, our proposed focal-loss and class-weight-aware BiLSTM model with pretrained GloVe embedding vectors uses citation context as input to outperform them in multiclass citation context classification tasks. Our model improves on the baseline state of the art by achieving an F1 score of 0.80 with an accuracy of 0.81 for citation context classification. Moreover, we examine the effects of using different word embeddings on the performance of the classification model and draw a comparison between fastText, GloVe, and spaCy pretrained word embeddings.
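The focal-loss and class-weight idea can be shown without any deep learning stack: relative to plain cross-entropy, focal loss multiplies by (1 - p)^gamma so that confidently correct predictions contribute little, while a per-class weight (alpha) counteracts class imbalance. A minimal sketch (the gamma and probability values are illustrative, not the paper's hyperparameters):

```python
import math

def focal_loss(p_true: float, alpha: float = 1.0, gamma: float = 2.0) -> float:
    """Class-weighted focal loss for the probability assigned to the true class.

    alpha is the per-class weight; gamma controls how strongly easy
    (high-probability) examples are down-weighted. gamma = 0, alpha = 1
    recovers ordinary cross-entropy: -log(p_true).
    """
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)

easy = focal_loss(0.9)  # well-classified example: heavily down-weighted
hard = focal_loss(0.1)  # badly classified example: keeps most of its loss
```

With gamma = 2, the easy example's loss is 1% of its cross-entropy value (-ln 0.9), while the hard example keeps 81% of -ln 0.1, so training gradients concentrate on the rare, hard citation classes.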
  17. Bagatini, J.A.; Chaves Guimarães, J.A.: Algorithmic discriminations and their ethical impacts on knowledge organization : a thematic domain-analysis (2023) 0.02
    
    Abstract
    Personal data play a fundamental role in contemporary socioeconomic dynamics, and one of their primary aspects is the potential to facilitate discriminatory situations. This affects the knowledge organization field in particular because the field treats personal data as elements (facets) for categorizing persons from an economic and sometimes discriminatory perspective. The research corpus was collected from Scopus and Web of Science through the end of 2021 using the terms "data discrimination", "algorithmic bias", "algorithmic discrimination", and "fair algorithms". The results allow the inference that the analyzed knowledge domain predominantly incorporates personal data, whether in their behavioral dimension or within the scope of so-called sensitive data. These data are susceptible to the action of algorithms of different orders, such as relevance, filtering, predictive, social ranking, content recommendation, and random classification algorithms. Such algorithms can carry discriminatory biases in their programming related to gender, sexual orientation, race, nationality, religion, age, social class, socioeconomic profile, physical appearance, and political positioning.
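    Each relevance score in this listing is a Lucene ClassicSimilarity "explain" tree, and its factors multiply out exactly. A short Python check of the breakdown shown for entry 17 (all constants copied from the tree above; the idf line assumes Lucene's classic formula idf = 1 + ln(maxDocs / (docFreq + 1))):

```python
import math

# Factors for the term "class" in doc 1134, as listed in the explain tree.
tf = math.sqrt(2.0)                        # 1.4142135 = sqrt(freq), freq = 2
idf = 1.0 + math.log(44218 / (420 + 1))    # ~5.6542544: idf(docFreq=420, maxDocs=44218)
query_norm = 0.05065357
field_norm = 0.0390625                     # field-length normalization

query_weight = idf * query_norm            # ~0.28640816
field_weight = tf * idf * field_norm       # ~0.31235638
score = query_weight * field_weight        # ~0.089461416
final = score * 0.5 * 0.5                  # two coord(1/2) factors -> ~0.022365354
```

    The same recipe reproduces every score in the listing; only tf, fieldNorm, and the coord factors change between entries.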
  18. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.020588579 = product of:
      0.041177157 = sum of:
        0.041177157 = product of:
          0.082354315 = sum of:
            0.082354315 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.082354315 = score(doc=4156,freq=2.0), product of:
                0.17738017 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05065357 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  19. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    0.020588579 = product of:
      0.041177157 = sum of:
        0.041177157 = product of:
          0.082354315 = sum of:
            0.082354315 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.082354315 = score(doc=1203,freq=2.0), product of:
                0.17738017 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05065357 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  20. St Jean, B.; Gorham, U.; Bonsignore, E.: Understanding human information behavior : when, how, and why people interact with information (2021) 0.02
    0.017892282 = product of:
      0.035784565 = sum of:
        0.035784565 = product of:
          0.07156913 = sum of:
            0.07156913 = weight(_text_:class in 205) [ClassicSimilarity], result of:
              0.07156913 = score(doc=205,freq=2.0), product of:
                0.28640816 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.05065357 = queryNorm
                0.2498851 = fieldWeight in 205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.03125 = fieldNorm(doc=205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This comprehensive text is the ideal resource for teaching human information behavior to undergraduate students. The text is organized in a thoughtful way to address all the most important aspects in an easy-to-digest manner, with the latter part of the book focusing on key areas of study within the information behavior field. The real-world examples included in the text will appeal to undergraduate students and help them connect to what information behavior looks like in practice. The authors write in a winningly approachable style that will help students connect with the key concepts. I particularly like the inclusion of Discussion Questions, which instructors can use as either homework or in-class discussion points to foster a rich dialogue about each of the chapters. Applicable research studies are introduced in the text in an approachable way, which will facilitate undergraduate engagement with the ongoing work in the discipline. The acronyms list and glossary at the back of the book are two additional, helpful resources for undergraduates to get up to speed on the most important topics under the umbrella of human information behavior.-- [Emily Vardell, PhD, assistant professor, School of Library and Information Management, Emporia State University]. Extremely accessible, comprehensive, and useful, Understanding Human Information Behavior: When, How, and Why People Interact with Information discusses the relevance and significance of its subject to our work and everyday life and is well-positioned to empower students to become helpful information and technology professionals.-- [Yan Zhang, associate professor, School of Information, The University of Texas at Austin].

Languages

  • e 88
  • d 29

Types

  • a 109
  • el 21
  • m 3
  • p 2
  • x 1