Search (21368 results, page 1 of 1069)

  1. Teixeira Lopes, C.; Paiva, D.; Ribeiro, C.: Effects of language and terminology of query suggestions on medical accuracy considering different user characteristics (2017) 0.11
    0.10894963 = product of:
      0.16342445 = sum of:
        0.021602063 = weight(_text_:to in 3783) [ClassicSimilarity], result of:
          0.021602063 = score(doc=3783,freq=12.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.24601223 = fieldWeight in 3783, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3783)
        0.14182238 = product of:
          0.28364477 = sum of:
            0.28364477 = weight(_text_:2075 in 3783) [ClassicSimilarity], result of:
              0.28364477 = score(doc=3783,freq=2.0), product of:
                0.49798483 = queryWeight, product of:
                  10.310593 = idf(docFreq=3, maxDocs=44218)
                  0.04829837 = queryNorm
                0.56958514 = fieldWeight in 3783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.310593 = idf(docFreq=3, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3783)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
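The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown, and it can be reproduced arithmetically. A minimal sketch, using only the constants shown in the tree (the factor names follow Lucene's explain output):

```python
import math

# Recomputing the first hit's score (doc 3783) from the factors above:
#   score       = coord * sum over matching terms of (queryWeight * fieldWeight)
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
query_norm = 0.04829837

def term_score(freq, idf, field_norm):
    tf = math.sqrt(freq)                 # e.g. 3.4641016 for freq=12.0
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

w_to = term_score(freq=12.0, idf=1.818051, field_norm=0.0390625)
# The "2075" clause sits in a nested sum with its own coord(1/2):
w_2075 = term_score(freq=2.0, idf=10.310593, field_norm=0.0390625) * 0.5

score = (w_to + w_2075) * (2.0 / 3.0)    # coord(2/3): 2 of 3 query clauses matched
print(round(score, 8))                   # ≈ 0.10894963, the score shown above
```

The same recipe applies to every explanation tree in this result list; only the term frequencies, idf values, field norms, and coord factors change.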
    
    Abstract
    Searching for health information is one of the most popular activities on the web. In this domain, users often misspell or lack knowledge of the proper medical terms to use in queries. To overcome these difficulties and attempt to retrieve higher-quality content, we developed a query suggestion system that provides alternative queries combining the Portuguese or English language with lay or medico-scientific terminology. Here we evaluate this system's impact on the medical accuracy of the knowledge acquired during the search. Evaluation shows that simply providing these suggestions contributes to reducing the quantity of incorrect content. This indicates that even when suggestions are not clicked, they are useful either for formulating subsequent queries or for interpreting search results. Clicking on suggestions, regardless of type, leads to answers with more correct content. An analysis by type of suggestion and user characteristics showed that the benefits of certain languages and terminologies are more perceptible in users with certain levels of English proficiency and health literacy. This suggests personalizing the suggestion system toward these characteristics. Overall, the effect of language is more pronounced than the effect of terminology. Clicks on English suggestions are clearly preferable to clicks on Portuguese ones.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2063-2075
  2. Mari, H.: Dos fundamentos da significação à produção do sentido (1996) 0.10
    0.09984626 = product of:
      0.14976938 = sum of:
        0.12216152 = product of:
          0.36648455 = sum of:
            0.36648455 = weight(_text_:object's in 819) [ClassicSimilarity], result of:
              0.36648455 = score(doc=819,freq=2.0), product of:
                0.4784015 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.04829837 = queryNorm
                0.7660606 = fieldWeight in 819, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=819)
          0.33333334 = coord(1/3)
        0.027607854 = weight(_text_:to in 819) [ClassicSimilarity], result of:
          0.027607854 = score(doc=819,freq=10.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.3144084 = fieldWeight in 819, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=819)
      0.6666667 = coord(2/3)
    
    Abstract
    An approach to establishing a relationship between knowing, informing and representing, using aspects of linguistic theory to clarify semantic theory as the basis for an overall theory of meaning. Linguistic knowledge is based on a conceptual matrix which defines convergence / divergence of the categories used to specify an object's parameters; work on the analysis of discourse emphasises the social dimension of meaning, which is the basis of speech act theory. The evaluation criteria used to determine questions about the possibility of knowledge are necessarily decisive; this opens up promising perspectives for formulating a relationship between conceptual and pragmatic approaches.
    Footnote
    Translated title: From the fundamentals of signification to the production of meaning
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.10
    0.096992694 = product of:
      0.14548904 = sum of:
        0.12785102 = product of:
          0.38355306 = sum of:
            0.38355306 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.38355306 = score(doc=1826,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.017638013 = weight(_text_:to in 1826) [ClassicSimilarity], result of:
          0.017638013 = score(doc=1826,freq=2.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.20086816 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.6666667 = coord(2/3)
    
    Content
    Presentation given at: European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Access project team: ACCESS: new OPAC interfaces at the Library of Congress put a new face on software development (1991) 0.09
    0.093598 = product of:
      0.280794 = sum of:
        0.280794 = product of:
          0.561588 = sum of:
            0.561588 = weight(_text_:2075 in 2074) [ClassicSimilarity], result of:
              0.561588 = score(doc=2074,freq=1.0), product of:
                0.49798483 = queryWeight, product of:
                  10.310593 = idf(docFreq=3, maxDocs=44218)
                  0.04829837 = queryNorm
                1.1277211 = fieldWeight in 2074, product of:
                  1.0 = tf(freq=1.0), with freq of:
                    1.0 = termFreq=1.0
                  10.310593 = idf(docFreq=3, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2074)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    2075
  5. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.09
    0.0884729 = product of:
      0.13270935 = sum of:
        0.10470988 = product of:
          0.31412962 = sum of:
            0.31412962 = weight(_text_:object's in 469) [ClassicSimilarity], result of:
              0.31412962 = score(doc=469,freq=2.0), product of:
                0.4784015 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.04829837 = queryNorm
                0.65662336 = fieldWeight in 469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.046875 = fieldNorm(doc=469)
          0.33333334 = coord(1/3)
        0.027999476 = weight(_text_:to in 469) [ClassicSimilarity], result of:
          0.027999476 = score(doc=469,freq=14.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.3188683 = fieldWeight in 469, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
      0.6666667 = coord(2/3)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
  6. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.08
    0.081490636 = product of:
      0.122235954 = sum of:
        0.10228082 = product of:
          0.30684245 = sum of:
            0.30684245 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.30684245 = score(doc=230,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.019955132 = weight(_text_:to in 230) [ClassicSimilarity], result of:
          0.019955132 = score(doc=230,freq=4.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.22725637 = fieldWeight in 230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.6666667 = coord(2/3)
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  7. Bordogna, G.; Pagani, M.: ¬A flexible content-based image retrieval model and a customizable system for the retrieval of shapes (2010) 0.07
    0.07131876 = product of:
      0.10697813 = sum of:
        0.087258235 = product of:
          0.2617747 = sum of:
            0.2617747 = weight(_text_:object's in 3450) [ClassicSimilarity], result of:
              0.2617747 = score(doc=3450,freq=2.0), product of:
                0.4784015 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.04829837 = queryNorm
                0.54718614 = fieldWeight in 3450, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3450)
          0.33333334 = coord(1/3)
        0.019719897 = weight(_text_:to in 3450) [ClassicSimilarity], result of:
          0.019719897 = score(doc=3450,freq=10.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.22457743 = fieldWeight in 3450, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3450)
      0.6666667 = coord(2/3)
    
    Abstract
    The authors describe a flexible model and a system for content-based image retrieval of objects' shapes. Flexibility is intended as the possibility of customizing the system behavior to the user's needs and perceptions. This is achieved by allowing users to modify the retrieval function. The system implementing this model uses multiple representations to characterize some macroscopic characteristics of the objects' shapes. Specifically, the shape indexes describe the global features of the object's contour (represented by the Fourier coefficients), the contour's irregularities (represented by the multifractal spectrum), and the presence of concavities and convexities (represented by the contour scale space distribution). During query formulation, the user can specify both a preference for the macroscopic shape aspects that he or she considers meaningful for the retrieval, and the desired level of accuracy of the matching, meaning that the visual query shape is considered with a given tolerance in representing the desired shapes. The evaluation experiments showed that this system can be suited to different retrieval behaviors and that, generally, the combination of the multiple shape representations increases both recall and precision with respect to the application of any single representation.
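The flexible aggregation the abstract describes can be illustrated with a small sketch: each shape carries several index representations, and the user supplies per-aspect preference weights plus a matching tolerance. The aspect names, the per-aspect similarity, and the weighted-average aggregation below are illustrative assumptions, not the authors' exact model.

```python
def similarity(a, b):
    # Toy per-aspect similarity on feature vectors: 1 / (1 + L1 distance).
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def flexible_match(query_features, object_features, weights, tolerance):
    """Weighted average of per-aspect similarities; 0.0 below the tolerance."""
    total_w = sum(weights.values())
    score = sum(
        weights[aspect] * similarity(query_features[aspect], object_features[aspect])
        for aspect in weights
    ) / total_w
    return score if score >= tolerance else 0.0

# Hypothetical feature vectors for the three representations named above.
q = {"fourier": [0.2, 0.4], "multifractal": [0.1], "scale_space": [0.3, 0.3]}
o = {"fourier": [0.2, 0.5], "multifractal": [0.1], "scale_space": [0.9, 0.3]}
w = {"fourier": 2.0, "multifractal": 1.0, "scale_space": 0.5}  # user preferences
print(flexible_match(q, o, w, tolerance=0.5))
```

Raising the tolerance makes the matching stricter, which is one way to read the "desired level of accuracy" control in the model.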
  8. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.07
    0.07130431 = product of:
      0.10695646 = sum of:
        0.08949572 = product of:
          0.26848716 = sum of:
            0.26848716 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.26848716 = score(doc=306,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.01746074 = weight(_text_:to in 306) [ClassicSimilarity], result of:
          0.01746074 = score(doc=306,freq=4.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.19884932 = fieldWeight in 306, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.6666667 = coord(2/3)
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ in respect to their mapping between service descriptions and ontologies and in respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  9. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.07
    0.069806725 = product of:
      0.10471009 = sum of:
        0.07671061 = product of:
          0.23013183 = sum of:
            0.23013183 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.23013183 = score(doc=2918,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.027999476 = weight(_text_:to in 2918) [ClassicSimilarity], result of:
          0.027999476 = score(doc=2918,freq=14.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.3188683 = fieldWeight in 2918, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.6666667 = coord(2/3)
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facet theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
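The core contrast in the abstract, one fixed hierarchy versus independent facets, can be sketched as follows: instead of filing a document at a single node, each record carries several facet values and can be retrieved through any combination of them. The facet names (activity, document_type) are hypothetical, not taken from the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    title: str
    facets: dict = field(default_factory=dict)  # independent facet -> value pairs

def find(records, **criteria):
    """Return records matching every given facet=value pair."""
    return [r for r in records
            if all(r.facets.get(k) == v for k, v in criteria.items())]

corpus = [
    Record("2021 budget", {"activity": "finance", "document_type": "report"}),
    Record("Hiring memo", {"activity": "HR", "document_type": "memo"}),
    Record("Audit notes", {"activity": "finance", "document_type": "memo"}),
]
# The same record is reachable by activity, by document type, or by both,
# without committing to one primary criterion up front.
print([r.title for r in find(corpus, activity="finance")])
print([r.title for r in find(corpus, activity="finance", document_type="memo")])
```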
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  10. Lutz, H.: Back to business : was CompuServe Unternehmen bietet (1997) 0.07
    0.068170026 = product of:
      0.10225504 = sum of:
        0.02822082 = weight(_text_:to in 6569) [ClassicSimilarity], result of:
          0.02822082 = score(doc=6569,freq=2.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.32138905 = fieldWeight in 6569, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.125 = fieldNorm(doc=6569)
        0.07403422 = product of:
          0.14806844 = sum of:
            0.14806844 = weight(_text_:22 in 6569) [ClassicSimilarity], result of:
              0.14806844 = score(doc=6569,freq=4.0), product of:
                0.16913266 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04829837 = queryNorm
                0.8754574 = fieldWeight in 6569, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6569)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 2.1997 19:50:29
    Source
    Cogito. 1997, H.1, S.22-23
  11. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.07
    0.06691633 = product of:
      0.10037449 = sum of:
        0.07671061 = product of:
          0.23013183 = sum of:
            0.23013183 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.23013183 = score(doc=400,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.023663877 = weight(_text_:to in 400) [ClassicSimilarity], result of:
          0.023663877 = score(doc=400,freq=10.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.26949292 = fieldWeight in 400, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6666667 = coord(2/3)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus. We then propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships; it resolves conflicts by maintaining the acyclic structure of the hierarchy.
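The acyclicity constraint in the final step can be sketched as follows: candidate parent-child links are accepted one by one, and any link that would close a cycle is rejected. This first-come conflict-resolution rule is an assumption for illustration, not the authors' exact algorithm.

```python
def grow_hierarchy(candidate_links):
    """candidate_links: iterable of (parent, child) pairs; returns accepted links."""
    children = {}  # parent -> set of accepted children

    def reaches(src, dst):
        # Depth-first search: is dst reachable from src via accepted links?
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(children.get(node, ()))
        return False

    accepted = []
    for parent, child in candidate_links:
        if reaches(child, parent):  # link would create a cycle: reject it
            continue
        children.setdefault(parent, set()).add(child)
        accepted.append((parent, child))
    return accepted

links = [("classification", "svm"), ("classification", "face recognition"),
         ("svm", "classification")]  # the last candidate conflicts
print(grow_hierarchy(links))
```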
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  12. Hubert, M.; Griesbaum, J.; Womser-Hacker, C.: Usability von Browsererweiterungen zum Schutz vor Tracking (2020) 0.07
    0.066183776 = product of:
      0.19855133 = sum of:
        0.19855133 = product of:
          0.39710265 = sum of:
            0.39710265 = weight(_text_:2075 in 5866) [ClassicSimilarity], result of:
              0.39710265 = score(doc=5866,freq=2.0), product of:
                0.49798483 = queryWeight, product of:
                  10.310593 = idf(docFreq=3, maxDocs=44218)
                  0.04829837 = queryNorm
                0.7974192 = fieldWeight in 5866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.310593 = idf(docFreq=3, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5866)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: https://doi.org/10.1515/iwp-2020-2075.
  13. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.06422794 = product of:
      0.0963419 = sum of:
        0.07671061 = product of:
          0.23013183 = sum of:
            0.23013183 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.23013183 = score(doc=562,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.019631287 = product of:
          0.039262574 = sum of:
            0.039262574 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.039262574 = score(doc=562,freq=2.0), product of:
                0.16913266 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04829837 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  14. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.06
    0.064051494 = product of:
      0.09607724 = sum of:
        0.087258235 = product of:
          0.2617747 = sum of:
            0.2617747 = weight(_text_:object's in 194) [ClassicSimilarity], result of:
              0.2617747 = score(doc=194,freq=2.0), product of:
                0.4784015 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.04829837 = queryNorm
                0.54718614 = fieldWeight in 194, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=194)
          0.33333334 = coord(1/3)
        0.008819006 = weight(_text_:to in 194) [ClassicSimilarity], result of:
          0.008819006 = score(doc=194,freq=2.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.10043408 = fieldWeight in 194, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=194)
      0.6666667 = coord(2/3)
    
    Abstract
    In the first part of this article the author defines the n-overlap vector whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in case n = 2). Next, the distributional form of the n-overlap vector is determined assuming certain distributions of the objects' and of the set (family) sizes. In this section the decreasing power law and decreasing exponential distribution is explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
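The n-overlap vector defined above is easy to compute empirically: coordinate k is the fraction of distinct objects that belong to exactly k of the n sets. As a minimal sketch (the all-sets intersection over the union shown for generalized_jaccard is one common way to extend Jaccard to n sets; Egghe's measures are richer):

```python
from collections import Counter

def n_overlap_vector(sets):
    """Fraction of distinct objects belonging to exactly 1, 2, ..., n sets."""
    n = len(sets)
    counts = Counter(obj for s in sets for obj in s)
    total = len(counts)  # number of distinct objects in the union
    return [sum(1 for c in counts.values() if c == k) / total
            for k in range(1, n + 1)]

def generalized_jaccard(sets):
    """Objects common to all n sets, as a fraction of the union (Jaccard for n = 2)."""
    universe = set().union(*sets)
    common = set.intersection(*map(set, sets))
    return len(common) / len(universe)

libs = [{"a", "b", "c"}, {"b", "c", "d"}, {"c", "d", "e"}]  # three toy libraries
print(n_overlap_vector(libs))   # fractions in exactly 1, 2, 3 libraries
print(generalized_jaccard(libs))
```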
  15. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.06
    0.063360386 = product of:
      0.095040575 = sum of:
        0.07671061 = product of:
          0.23013183 = sum of:
            0.23013183 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23013183 = score(doc=862,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.018329961 = weight(_text_:to in 862) [ClassicSimilarity], result of:
          0.018329961 = score(doc=862,freq=6.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.20874833 = fieldWeight in 862, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6666667 = coord(2/3)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
  16. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.06
    0.061117973 = product of:
      0.09167696 = sum of:
        0.07671061 = product of:
          0.23013183 = sum of:
            0.23013183 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.23013183 = score(doc=2514,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.014966349 = weight(_text_:to in 2514) [ClassicSimilarity], result of:
          0.014966349 = score(doc=2514,freq=4.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.17044228 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
      0.6666667 = coord(2/3)
    
    Abstract
    In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Among the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
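    The weighted-HITS idea summarized in the abstract above can be sketched as follows. This is a minimal, hypothetical version assuming per-edge weights on in-links; the paper's actual weighting scheme is not reproduced here:

```python
# Hedged sketch of HITS with weighted in-links (illustrative only).
# out_links: {node: [targets]}; weights: {(src, dst): w}, defaulting to 1.0.
def weighted_hits(out_links, weights, iterations=50):
    nodes = set(out_links) | {t for ts in out_links.values() for t in ts}
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # authority: weighted sum of hub scores of in-linking pages
        new_auth = {n: 0.0 for n in nodes}
        for src, targets in out_links.items():
            for dst in targets:
                new_auth[dst] += weights.get((src, dst), 1.0) * hub[src]
        # hub: weighted sum of authority scores of linked-to pages
        new_hub = {n: 0.0 for n in nodes}
        for src, targets in out_links.items():
            for dst in targets:
                new_hub[src] += weights.get((src, dst), 1.0) * new_auth[dst]
        # L2-normalize both vectors each iteration
        na = sum(v * v for v in new_auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
        auth = {n: v / na for n, v in new_auth.items()}
        hub = {n: v / nh for n, v in new_hub.items()}
    return auth, hub
```

    On a toy graph where two pages both link to a third, the third page receives the top authority score, as expected.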
  17. Hemminger, B.M.: Introduction to the special issue on bioinformatics (2005) 0.06
    0.059648775 = product of:
      0.08947316 = sum of:
        0.024693217 = weight(_text_:to in 4189) [ClassicSimilarity], result of:
          0.024693217 = score(doc=4189,freq=2.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.28121543 = fieldWeight in 4189, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.109375 = fieldNorm(doc=4189)
        0.064779945 = product of:
          0.12955989 = sum of:
            0.12955989 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.12955989 = score(doc=4189,freq=4.0), product of:
                0.16913266 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04829837 = queryNorm
                0.76602525 = fieldWeight in 4189, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 7.2006 14:19:22
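    The score breakdowns accompanying each hit are Lucene ClassicSimilarity "explain" trees. As a sketch, the tree for result 17 above can be re-computed from its own constants (per-term score = queryWeight × fieldWeight, scaled by the coord factors):

```python
import math

# Constants copied from the explain tree for result 17 (doc 4189).
query_norm = 0.04829837
field_norm = 0.109375

def term_score(tf, idf, coord_inner=1.0):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                   # idf * queryNorm
    field_weight = math.sqrt(tf) * idf * field_norm   # sqrt(tf) * idf * fieldNorm
    return query_weight * field_weight * coord_inner

score_to = term_score(tf=2.0, idf=1.818051)                     # ~0.0246932
score_22 = term_score(tf=4.0, idf=3.5018296, coord_inner=0.5)   # ~0.0647799
total = (score_to + score_22) * (2.0 / 3.0)                     # outer coord(2/3)
print(round(total, 6))  # ~0.059649, matching the 0.0596… total above
```

    Note how the square-root term-frequency damping and the 1/2 and 2/3 coord factors reproduce the listed intermediate values.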
  18. Buzydlowski, J.W.; White, H.D.; Lin, X.: Term Co-occurrence Analysis as an Interface for Digital Libraries (2002) 0.06
    0.05944693 = product of:
      0.0891704 = sum of:
        0.021165613 = weight(_text_:to in 1339) [ClassicSimilarity], result of:
          0.021165613 = score(doc=1339,freq=2.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.24104178 = fieldWeight in 1339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.09375 = fieldNorm(doc=1339)
        0.06800478 = product of:
          0.13600956 = sum of:
            0.13600956 = weight(_text_:22 in 1339) [ClassicSimilarity], result of:
              0.13600956 = score(doc=1339,freq=6.0), product of:
                0.16913266 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04829837 = queryNorm
                0.804159 = fieldWeight in 1339, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1339)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:16:22
    Source
    Visual Interfaces to Digital Libraries. Eds.: Börner, K. u. C. Chen
  19. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései [Issues of the automatic interpretation and analysis of UDC notations] (2018) 0.06
    0.05817227 = product of:
      0.087258406 = sum of:
        0.06392551 = product of:
          0.19177653 = sum of:
            0.19177653 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.19177653 = score(doc=855,freq=2.0), product of:
                0.4094741 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04829837 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
        0.023332896 = weight(_text_:to in 855) [ClassicSimilarity], result of:
          0.023332896 = score(doc=855,freq=14.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.2657236 = fieldWeight in 855, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.6666667 = coord(2/3)
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations as automatically as possible is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also aim to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all of their elements and revealing their complete syntactic structure. In my dissertation I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support the building, retrieval, and analysis of UDC numbers without compromises, taking in the whole syntactic richness of the scheme and storing UDC numbers in a way that preserves the meaning of pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively on existing collections.
    Content
    See also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
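    As a toy illustration of the parsing task described in the abstract above (far shallower than the dissertation's parser, and assuming only a small set of common UDC connective symbols), a pre-coordinated classmark can be tokenized on its connectives:

```python
import re

# Toy tokenizer splitting a pre-coordinated UDC classmark on a few common
# connective symbols (+, /, :, ::, =). Purely illustrative: the real syntax
# also covers auxiliary signs, quoted time divisions, and nested structures.
UDC_CONNECTIVES = re.compile(r"(::|[+/:=])")

def split_udc(classmark):
    """Return an alternating list of number parts and connective symbols."""
    return [tok for tok in UDC_CONNECTIVES.split(classmark) if tok]

print(split_udc("821.112.2:811.111"))  # ['821.112.2', ':', '811.111']
```

    Capturing the connectives (via the group in the pattern) keeps the relation symbols in the token stream, which is the first step toward recovering the syntactic structure the dissertation targets.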
  20. Chen, C.: Top Ten Problems in Visual Interfaces to Digital Libraries (2002) 0.06
    0.056972247 = product of:
      0.08545837 = sum of:
        0.029932698 = weight(_text_:to in 4840) [ClassicSimilarity], result of:
          0.029932698 = score(doc=4840,freq=4.0), product of:
            0.0878089 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04829837 = queryNorm
            0.34088457 = fieldWeight in 4840, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.09375 = fieldNorm(doc=4840)
        0.055525668 = product of:
          0.111051336 = sum of:
            0.111051336 = weight(_text_:22 in 4840) [ClassicSimilarity], result of:
              0.111051336 = score(doc=4840,freq=4.0), product of:
                0.16913266 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04829837 = queryNorm
                0.6565931 = fieldWeight in 4840, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4840)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:13:11
    Source
    Visual Interfaces to Digital Libraries. Eds.: Börner, K. u. C. Chen
