Search (38324 results, page 1 of 1917)

  1. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.18
    0.1844457 = sum of:
      0.054374374 = product of:
        0.16312312 = sum of:
          0.16312312 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
            0.16312312 = score(doc=692,freq=2.0), product of:
              0.3482944 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.041082088 = queryNorm
              0.46834838 = fieldWeight in 692, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=692)
        0.33333334 = coord(1/3)
      0.12745823 = product of:
        0.25491646 = sum of:
          0.25491646 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
            0.25491646 = score(doc=692,freq=2.0), product of:
              0.43539926 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.041082088 = queryNorm
              0.5854775 = fieldWeight in 692, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=692)
        0.5 = coord(1/2)
      0.002613077 = product of:
        0.005226154 = sum of:
          0.005226154 = weight(_text_:a in 692) [ClassicSimilarity], result of:
            0.005226154 = score(doc=692,freq=6.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.11032722 = fieldWeight in 692, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=692)
        0.5 = coord(1/2)
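The indented tree above is Lucene ClassicSimilarity (TF-IDF) explain output. A minimal sketch, assuming Lucene's documented formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm), that reproduces the first leaf weight from the numbers shown:

```python
import math

def classic_leaf_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One leaf of a ClassicSimilarity explain tree: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011 for docFreq=24
    query_weight = idf * query_norm                    # 0.3482944
    field_weight = tf * idf * field_norm               # 0.46834838
    return query_weight * field_weight                 # ~0.16312312

w = classic_leaf_weight(freq=2.0, doc_freq=24, max_docs=44218,
                        query_norm=0.041082088, field_norm=0.0390625)
```

Multiplying by the coord(1/3) factor shown under the leaf gives the 0.054374374 contribution to the summed score.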
    
    Abstract
What is the difference between Piaget's constructivism and Papert's "constructionism"? Beyond the mere play on words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget's constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children's ways of doing and thinking evolve over time, and under which circumstances children are more likely to let go of, or hold onto, their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they're wrong. Papert's constructionism, in contrast, focuses more on the art of learning, or 'learning to learn', and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people's] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world.
    Content
Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further pointers to related contributions can be found there. Also in: Learning Group Publication 5(2001) no.3, p.438.
    Type
    a
  2. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.11
    0.10553722 = product of:
      0.15830582 = sum of:
        0.12745823 = product of:
          0.25491646 = sum of:
            0.25491646 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
              0.25491646 = score(doc=193,freq=2.0), product of:
                0.43539926 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.041082088 = queryNorm
                0.5854775 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
        0.0308476 = sum of:
          0.003017321 = weight(_text_:a in 193) [ClassicSimilarity], result of:
            0.003017321 = score(doc=193,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.06369744 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
          0.027830279 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.027830279 = score(doc=193,freq=2.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
      0.6666667 = coord(2/3)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
    Type
    a
  3. Iandoli, L.; Quinto, I.; De Liddo, A.; Shum, S.B.: On online collaboration and construction of shared knowledge : assessing mediation capability in computer supported argument visualization tools (2016) 0.09
    0.09444296 = product of:
      0.14166445 = sum of:
        0.1373973 = product of:
          0.2747946 = sum of:
            0.2747946 = weight(_text_:liddo in 2890) [ClassicSimilarity], result of:
              0.2747946 = score(doc=2890,freq=2.0), product of:
                0.45205662 = queryWeight, product of:
                  11.00374 = idf(docFreq=1, maxDocs=44218)
                  0.041082088 = queryNorm
                0.60787654 = fieldWeight in 2890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  11.00374 = idf(docFreq=1, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2890)
          0.5 = coord(1/2)
        0.0042671366 = product of:
          0.008534273 = sum of:
            0.008534273 = weight(_text_:a in 2890) [ClassicSimilarity], result of:
              0.008534273 = score(doc=2890,freq=16.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.18016359 = fieldWeight in 2890, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2890)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
Collaborative Computer-Supported Argument Visualization (CCSAV) has often been proposed as an alternative to more conventional, mainstream platforms for online discussion (e.g., online forums and wikis). CCSAV tools require users to contribute to the creation of a joint artifact (argument map) instead of contributing to a conversation. In this paper we assess empirically the effects of this fundamental design choice and show that the absence of conversational affordances and socially salient information in representation-centric tools is detrimental to the users' collaboration experience. We report empirical findings from a study in which subjects using different collaborative platforms (a forum, an argumentation platform, and a socially augmented argumentation tool) were asked to discuss and predict the price of a commodity. By comparing users' experience across several metrics we found evidence that the collaborative performance decreases gradually when we remove conversational interaction and other types of socially salient information. We interpret these findings through theories developed in conversational analysis (common ground theory) and communities of practice and discuss design implications. In particular, we propose balancing the trade-off between knowledge reification and participation in representation-centric tools with the provision of social feedback and functionalities supporting meaning negotiation.
    Type
    a
  4. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.070591435 = product of:
      0.10588715 = sum of:
        0.06524925 = product of:
          0.19574773 = sum of:
            0.19574773 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.19574773 = score(doc=562,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.040637903 = sum of:
          0.007241571 = weight(_text_:a in 562) [ClassicSimilarity], result of:
            0.007241571 = score(doc=562,freq=8.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.15287387 = fieldWeight in 562, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.033396333 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.033396333 = score(doc=562,freq=2.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
      0.6666667 = coord(2/3)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Type
    a
  5. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.06
    0.0615977 = product of:
      0.09239655 = sum of:
        0.086999 = product of:
          0.260997 = sum of:
            0.260997 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.260997 = score(doc=230,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.0053975484 = product of:
          0.010795097 = sum of:
            0.010795097 = weight(_text_:a in 230) [ClassicSimilarity], result of:
              0.010795097 = score(doc=230,freq=10.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.22789092 = fieldWeight in 230, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
    Type
    a
  6. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.06
    0.05960857 = product of:
      0.08941285 = sum of:
        0.086999 = product of:
          0.260997 = sum of:
            0.260997 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.260997 = score(doc=140,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
        0.0024138568 = product of:
          0.0048277136 = sum of:
            0.0048277136 = weight(_text_:a in 140) [ClassicSimilarity], result of:
              0.0048277136 = score(doc=140,freq=2.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.10191591 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
    Type
    a
  7. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.05
    0.053565588 = product of:
      0.08034838 = sum of:
        0.07612413 = product of:
          0.22837238 = sum of:
            0.22837238 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.22837238 = score(doc=306,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.0042242496 = product of:
          0.008448499 = sum of:
            0.008448499 = weight(_text_:a in 306) [ClassicSimilarity], result of:
              0.008448499 = score(doc=306,freq=8.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.17835285 = fieldWeight in 306, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ in respect to their mapping between service descriptions and ontologies and in respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
    Type
    a
  8. Alexandre Hannud Abdo, A.H. => Hannud Abdo, A.: 0.05
    0.04935616 = product of:
      0.14806847 = sum of:
        0.14806847 = sum of:
          0.014483142 = weight(_text_:a in 617) [ClassicSimilarity], result of:
            0.014483142 = score(doc=617,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.30574775 = fieldWeight in 617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.1875 = fieldNorm(doc=617)
          0.13358533 = weight(_text_:22 in 617) [ClassicSimilarity], result of:
            0.13358533 = score(doc=617,freq=2.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.92856276 = fieldWeight in 617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.1875 = fieldNorm(doc=617)
      0.33333334 = coord(1/3)
    
    Date
    7. 6.2022 19:22:19
  9. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    0.04785114 = product of:
      0.07177671 = sum of:
        0.06524925 = product of:
          0.19574773 = sum of:
            0.19574773 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.19574773 = score(doc=400,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.006527463 = product of:
          0.013054926 = sum of:
            0.013054926 = weight(_text_:a in 400) [ClassicSimilarity], result of:
              0.013054926 = score(doc=400,freq=26.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.27559727 = fieldWeight in 400, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with a specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus. And we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.
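The growth algorithm itself is not reproduced in this entry, but its acyclicity constraint can be illustrated. A hypothetical sketch (names and data structure assumed for illustration, not the authors' code) of accepting a parent-child link only when it keeps the hierarchy acyclic:

```python
def would_create_cycle(parents, parent, child):
    """True if attaching child under parent would close a cycle,
    i.e. if parent is already a descendant of child.
    parents maps each node to its current parent."""
    stack = [n for n, p in parents.items() if p == child]  # direct children
    seen = set()
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(n for n, p in parents.items() if p == node)
    return False

def add_link(parents, parent, child):
    """Insert a parent-child link unless it would break acyclicity."""
    if parent == child or would_create_cycle(parents, parent, child):
        return False
    parents[child] = parent
    return True
```

A conflicting reverse link (one that would make an ancestor a descendant of its own child) is simply rejected, preserving the tree structure.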
    Content
Cf.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  10. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.05
    0.046913207 = product of:
      0.07036981 = sum of:
        0.06524925 = product of:
          0.19574773 = sum of:
            0.19574773 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.19574773 = score(doc=2918,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.005120564 = product of:
          0.010241128 = sum of:
            0.010241128 = weight(_text_:a in 2918) [ClassicSimilarity], result of:
              0.010241128 = score(doc=2918,freq=16.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.2161963 = fieldWeight in 2918, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance to regulatory and policy requirements.
    Footnote
Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
    Type
    a
  11. Diederichs, A.: Wissensmanagement ist Macht : Effektiv und kostenbewußt arbeiten im Informationszeitalter (2005) 0.05
    0.046533436 = product of:
      0.1396003 = sum of:
        0.1396003 = sum of:
          0.013654836 = weight(_text_:a in 3211) [ClassicSimilarity], result of:
            0.013654836 = score(doc=3211,freq=4.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.28826174 = fieldWeight in 3211, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=3211)
          0.12594546 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
            0.12594546 = score(doc=3211,freq=4.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.8754574 = fieldWeight in 3211, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=3211)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2005 9:16:22
    Type
    a
  12. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.05
    0.046198275 = product of:
      0.06929741 = sum of:
        0.06524925 = product of:
          0.19574773 = sum of:
            0.19574773 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.19574773 = score(doc=862,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.004048161 = product of:
          0.008096322 = sum of:
            0.008096322 = weight(_text_:a in 862) [ClassicSimilarity], result of:
              0.008096322 = score(doc=862,freq=10.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.1709182 = fieldWeight in 862, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
https://arxiv.org/abs/2212.06721
    Type
    a
  13. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.05
    0.04591336 = product of:
      0.06887004 = sum of:
        0.06524925 = product of:
          0.19574773 = sum of:
            0.19574773 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.19574773 = score(doc=2514,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.0036207854 = product of:
          0.007241571 = sum of:
            0.007241571 = weight(_text_:a in 2514) [ClassicSimilarity], result of:
              0.007241571 = score(doc=2514,freq=8.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.15287387 = fieldWeight in 2514, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Between the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
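For context, a minimal sketch of the standard (unweighted) HITS update that such methods build on; this is the baseline iteration, not the authors' weighted variant, and the toy graph is assumed for illustration:

```python
def hits(out_links, iters=50):
    """Standard HITS: alternating authority/hub updates with L2 normalization.
    out_links[u] is the set of pages that u links to."""
    nodes = sorted(set(out_links) | {v for vs in out_links.values() for v in vs})
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # authority(p) = sum of hub scores of pages linking to p
        auth = {n: sum(hub[u] for u in nodes if n in out_links.get(u, ()))
                for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # hub(p) = sum of authority scores of pages p links to
        hub = {n: sum(auth[v] for v in out_links.get(n, ())) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return auth, hub

# Toy graph: a and b are hubs; c (two in-links) and d (one) are authorities.
auth, hub = hits({"a": {"c"}, "b": {"c", "d"}, "c": set(), "d": set()})
```

The weighted variant described in the abstract would replace the uniform sums with weighted sums over in-links of the root documents.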
    Content
Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
    Type
    a
  14. Fachsystematik Bremen nebst Schlüssel 1970 ff. (1970 ff) 0.05
    0.04552634 = product of:
      0.06828951 = sum of:
        0.054374374 = product of:
          0.16312312 = sum of:
            0.16312312 = weight(_text_:3a in 3577) [ClassicSimilarity], result of:
              0.16312312 = score(doc=3577,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.46834838 = fieldWeight in 3577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3577)
          0.33333334 = coord(1/3)
        0.013915139 = product of:
          0.027830279 = sum of:
            0.027830279 = weight(_text_:22 in 3577) [ClassicSimilarity], result of:
              0.027830279 = score(doc=3577,freq=2.0), product of:
                0.14386247 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041082088 = queryNorm
                0.19345059 = fieldWeight in 3577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3577)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    1. Agrarwissenschaften 1981. - 3. Allgemeine Geographie 2.1972. - 3a. Allgemeine Naturwissenschaften 1.1973. - 4. Allgemeine Sprachwissenschaft, Allgemeine Literaturwissenschaft 2.1971. - 6. Allgemeines. 5.1983. - 7. Anglistik 3.1976. - 8. Astronomie, Geodäsie 4.1977. - 12. bio Biologie, bcp Biochemie-Biophysik, bot Botanik, zoo Zoologie 1981. - 13. Bremensien 3.1983. - 13a. Buch- und Bibliothekswesen 3.1975. - 14. Chemie 4.1977. - 14a. Elektrotechnik 1974. - 15. Ethnologie 2.1976. - 16,1. Geowissenschaften. Sachteil 3.1977. - 16,2. Geowissenschaften. Regionaler Teil 3.1977. - 17. Germanistik 6.1984. - 17a,1. Geschichte. Teilsystematik hil. - 17a,2. Geschichte. Teilsystematik his Neuere Geschichte. - 17a,3. Geschichte. Teilsystematik hit Neueste Geschichte. - 18. Humanbiologie 2.1983. - 19. Ingenieurwissenschaften 1974. - 20. siehe 14a. - 21. Klassische Philologie 3.1977. - 22. Klinische Medizin 1975. - 23. Kunstgeschichte 2.1971. - 24. Kybernetik. 2.1975. - 25. Mathematik 3.1974. - 26. Medizin 1976. - 26a. Militärwissenschaft 1985. - 27. Musikwissenschaft 1978. - 27a. Noten 2.1974. - 28. Ozeanographie 3.1977. - 29. Pädagogik 8.1985. - 30. Philosophie 3.1974. - 31. Physik 3.1974. - 33. Politik, Politische Wissenschaft, Sozialwissenschaft. Soziologie. Länderschlüssel. Register 1981. - 34. Psychologie 2.1972. - 35. Publizistik und Kommunikationswissenschaft 1985. - 36. Rechtswissenschaften 1986. - 37. Regionale Geographie 3.1975. - 37a. Religionswissenschaft 1970. - 38. Romanistik 3.1976. - 39. Skandinavistik 4.1985. - 40. Slavistik 1977. - 40a. Sonstige Sprachen und Literaturen 1973. - 43. Sport 4.1983. - 44. Theaterwissenschaft 1985. - 45. Theologie 2.1976. - 45a. Ur- und Frühgeschichte, Archäologie 1970. - 47. Volkskunde 1976. - 47a. Wirtschaftswissenschaften 1971 // Schlüssel: 1. Länderschlüssel 1971. - 2. Formenschlüssel (Kurzform) 1974. - 3. Personenschlüssel Literatur 5. Fassung 1968
  15. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.05
    0.045206353 = product of:
      0.06780953 = sum of:
        0.06524925 = product of:
          0.19574773 = sum of:
            0.19574773 = weight(_text_:3a in 484) [ClassicSimilarity], result of:
              0.19574773 = score(doc=484,freq=2.0), product of:
                0.3482944 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041082088 = queryNorm
                0.56201804 = fieldWeight in 484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.33333334 = coord(1/3)
        0.002560282 = product of:
          0.005120564 = sum of:
            0.005120564 = weight(_text_:a in 484) [ClassicSimilarity], result of:
              0.005120564 = score(doc=484,freq=4.0), product of:
                0.047369577 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041082088 = queryNorm
                0.10809815 = fieldWeight in 484, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/.
    Location
    A
    Type
    a
  16. Jacsó, P.: Searching for images by similarity online (1998) 0.05
    0.0452003 = product of:
      0.1356009 = sum of:
        0.1356009 = sum of:
          0.009655427 = weight(_text_:a in 393) [ClassicSimilarity], result of:
            0.009655427 = score(doc=393,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.20383182 = fieldWeight in 393, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=393)
          0.12594546 = weight(_text_:22 in 393) [ClassicSimilarity], result of:
            0.12594546 = score(doc=393,freq=4.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.8754574 = fieldWeight in 393, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=393)
      0.33333334 = coord(1/3)
    
    Date
    29.11.2004 13:03:22
    Source
    Online. 22(1998) no.6, S.99-102
    Type
    a
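Entry 16's total also shows how leaf scores combine into a document score: the two matching term leaves are summed and then multiplied by the coord(1/3) factor shown in the explanation. A sketch reproducing the 0.0452003 total for doc 393, under the same assumed ClassicSimilarity formulas as above:

```python
import math

MAX_DOCS = 44218          # maxDocs from the explanations above
QUERY_NORM = 0.041082088  # queryNorm from the explanations above

def leaf(freq, doc_freq, field_norm):
    # tf * idf leaf weight as in ClassicSimilarity (assumed formulas):
    # (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))
    return (idf * QUERY_NORM) * (tf * idf * field_norm)

# doc 393: leaves for "_text_:a" (freq=2, docFreq=37942) and
# "_text_:22" (freq=4, docFreq=3622), both with fieldNorm=0.125,
# summed and scaled by the coord(1/3) factor from the explanation.
total = (leaf(2.0, 37942, 0.125) + leaf(4.0, 3622, 0.125)) * (1.0 / 3.0)
# total ≈ 0.0452003
```

The rarer term "22" (docFreq 3622, idf ≈ 3.50) dominates the score, while the near-stopword "a" (docFreq 37942, idf ≈ 1.15) contributes little; this is the idf weighting doing its job.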
  17. Rübesame, O.: Probleme des geographischen Schlüssels (1963) 0.05
    0.0452003 = product of:
      0.1356009 = sum of:
        0.1356009 = sum of:
          0.009655427 = weight(_text_:a in 134) [ClassicSimilarity], result of:
            0.009655427 = score(doc=134,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.20383182 = fieldWeight in 134, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=134)
          0.12594546 = weight(_text_:22 in 134) [ClassicSimilarity], result of:
            0.12594546 = score(doc=134,freq=4.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.8754574 = fieldWeight in 134, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=134)
      0.33333334 = coord(1/3)
    
    Date
    17. 1.1999 13:22:22
    Type
    a
  18. Lutz, H.: Back to business : was CompuServe Unternehmen bietet (1997) 0.05
    0.0452003 = product of:
      0.1356009 = sum of:
        0.1356009 = sum of:
          0.009655427 = weight(_text_:a in 6569) [ClassicSimilarity], result of:
            0.009655427 = score(doc=6569,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.20383182 = fieldWeight in 6569, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=6569)
          0.12594546 = weight(_text_:22 in 6569) [ClassicSimilarity], result of:
            0.12594546 = score(doc=6569,freq=4.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.8754574 = fieldWeight in 6569, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=6569)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.1997 19:50:29
    Source
    Cogito. 1997, H.1, S.22-23
    Type
    a
  19. Klauß, H.: SISIS : 10. Anwenderforum Berlin-Brandenburg (1999) 0.05
    0.0452003 = product of:
      0.1356009 = sum of:
        0.1356009 = sum of:
          0.009655427 = weight(_text_:a in 463) [ClassicSimilarity], result of:
            0.009655427 = score(doc=463,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.20383182 = fieldWeight in 463, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=463)
          0.12594546 = weight(_text_:22 in 463) [ClassicSimilarity], result of:
            0.12594546 = score(doc=463,freq=4.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.8754574 = fieldWeight in 463, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=463)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.1999 10:22:52
    Type
    a
  20. fwt: Wie das Gehirn Bilder 'liest' (1999) 0.05
    0.0452003 = product of:
      0.1356009 = sum of:
        0.1356009 = sum of:
          0.009655427 = weight(_text_:a in 4042) [ClassicSimilarity], result of:
            0.009655427 = score(doc=4042,freq=2.0), product of:
              0.047369577 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041082088 = queryNorm
              0.20383182 = fieldWeight in 4042, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=4042)
          0.12594546 = weight(_text_:22 in 4042) [ClassicSimilarity], result of:
            0.12594546 = score(doc=4042,freq=4.0), product of:
              0.14386247 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041082088 = queryNorm
              0.8754574 = fieldWeight in 4042, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=4042)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2000 19:01:22
    Type
    a
