Search (33 results, page 1 of 2)

  • Filter: type_ss:"p"
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.38
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
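The relevance figure attached to each hit (0.38 for this first result) is a Lucene ClassicSimilarity (TF-IDF) score. As a minimal sketch, using the parameters the engine's score explanation reports for the "_text_:2f" term of this hit (termFreq=2.0, docFreq=24, maxDocs=44218, queryNorm=0.031400457, fieldNorm=0.046875), one term's contribution and the coordinated document score can be recomputed:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene's ClassicSimilarity:
    score = queryWeight * fieldWeight, where
      queryWeight = idf * queryNorm
      fieldWeight = tf * idf * fieldNorm
    """
    tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Parameters reported for the "_text_:2f" term of the first hit (doc 862):
term = classic_term_score(freq=2.0, doc_freq=24, max_docs=44218,
                          query_norm=0.031400457, field_norm=0.046875)

# The document score multiplies the sum of the matching terms' contributions
# by a coordination factor, coord(matching terms / total query terms):
total = 0.8679737 * (7 / 16)   # reported sum of seven term weights, coord(7/16)
```

Each term contributes queryWeight times fieldWeight; the document score scales the sum of matching-term weights by coord(7/16), since only 7 of the 16 query terms matched this record.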
  2. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.04
    
    Abstract
    Involving users in early phases of software development has become a common strategy as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure allowing experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log-file analyses to enable large-scale A/B experiments for academic search.
  3. Walker, S.: Der Mensch-Maschine-Dialog bei Online-Benutzer-Katalogen (1987) 0.03
    
  4. Williamson, N.J.: Online Klassifikation : Gegenwart und Zukunft (1988) 0.01
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Robertson, S.E.: OKAPI at TREC-3 (1995) 0.01
    
    Abstract
    Reports text information retrieval experiments performed as part of the 3rd round of Text Retrieval Conferences (TREC) using the Okapi online catalogue system at City University, UK. The emphasis in TREC-3 was: further refinement of term weighting functions; an investigation of run-time passage determination and searching; expansion of ad hoc queries by terms extracted from the top documents retrieved by a trial search; new methods for choosing query expansion terms after relevance feedback, now split into methods of ranking terms prior to selection and subsequent selection procedures; and the development of a user interface procedure within the new TREC interactive search framework.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  6. Robertson, S.E.: OKAPI at TREC (1994) 0.01
    
    Abstract
    Paper presented at the Text Retrieval Conference (TREC), Washington, DC, Nov 1992. Describes the OKAPI experimental text information retrieval system in terms of its design principles: the use of simple, robust and easy-to-use techniques that employ best-match searching and avoid Boolean logic.
  7. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify different approaches and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on the intrinsic study of the category structure, or its use as a tool for the processing and analysis of documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular its application in the refinement and improvement of search expressions, and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  8. Kemp, A. de: Information provision : a publisher's point of view in changing times and with new technologies (1993) 0.01
    
    Abstract
    Almost everybody seems to be talking about document delivery and digital libraries. Library networks are starting joint ventures with journal subscription agencies and offering electronic tables of contents. Integrated systems for image management and document management are being implemented. Academic networks and Internet are being used at an exponential rate. At the same time budgets for the acquisition of books and journals are shrinking and alternatives for the delivery of information are being discussed. Are there alternatives and what will be their impact?
    Theme
    Elektronisches Publizieren
  9. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.01
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the current state of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is a first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
  10. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.01
    
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments were conducted on the OKAPI experimental text information retrieval system which investigated a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson Sparck Jones weighting functions used in TREC-1
  11. Grötschel, M.; Lügger, J.; Sperber, W.: Wissenschaftliches Publizieren und elektronische Fachinformation im Umbruch : ein Situationsbericht aus der Sicht der Mathematik (1993) 0.01
    
    Theme
    Elektronisches Publizieren
  12. Panzer, M.: Dewey Web services : overview (2009) 0.00
    
  13. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.00
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  14. Ockenfeld, M.: MultiMedia Forum : Konzeption und Erprobung einer elektronischen Mitarbeiterzeitung in einer räumlich verteilten Organisation (1994) 0.00
    
    Theme
    Elektronisches Publizieren
  15. Pejtersen, A.M.; Jensen, H.; Speck, P.; Villumsen, S.; Weber, S.: Catalogs for children : the Book House project on visualization of database retrieval and classification (1993) 0.00
    
    Abstract
    This paper describes the Book House system, which is designed to support children's information retrieval in libraries as part of their education. It is a shareware program available on CD-ROM and discs, and comprises functionality for database searching as well as for the classification and storage of book information in the database. The system concept is based on an understanding of children's domain structures and their capabilities for categorizing information needs in connection with their activities in public libraries, in school libraries or in schools. These structures are visualized in the interface by using metaphors and multimedia technology. Through the use of text, images and animation, the Book House enables children - even at a very early age - to learn by doing in an enjoyable way which plays on their previous experiences with computer games. Both words and pictures can be used for searching; this makes the system suitable for all age groups. Even children who have not yet learned to read properly can, by selecting pictures, search for and find books they would like to have read aloud. Thus at the very beginning of their school period, they can learn to search for books on their own. For the library community itself, such a system will provide an extended service which will increase the number of children's own searches and also improve the relevance, quality and utilization of the collections in the libraries. Market research on the need for an annual indexing service for books in the Book House format is being prepared by the Danish Library Center.
  16. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    
    Theme
    Classification systems in online retrieval
  17. Gödert, W.: Navigation und Retrieval in Datenbanken und Informationsnetzen (1995) 0.00
    
  18. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.00
    
    Date
    7. 2.1996 22:38:41
    Pages
    22 p.
  19. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.00
    
    Abstract
    The number of mathematics-related publications grows from year to year. Abstracting services such as Zentralblatt MATH and Mathematical Reviews record the bibliographic data, index the works by subject and make them searchable for users - today via databases, formerly in printed form. Keywords are an essential component of the subject indexing of these publications. Keywords are usually not single words but multi-word phrases, which suggests the application of linguistic methods and techniques. The software 'Lingo', developed at the FH Köln, was adapted to the special requirements of mathematical texts and used both to build a controlled vocabulary and to extract keywords from mathematical publications. It is planned to develop and test methods for automatic classification for the abstracting service Zentralblatt MATH by linking the controlled vocabulary with the Mathematical Subject Classification.
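The idea of treating multi-word phrases (rather than single words) as keyword candidates can be illustrated with a minimal sketch. This is not Lingo's actual implementation; the stopword list, the tokenizer, and frequency ranking are simplifying assumptions for illustration only:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real system would use a full one.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "and", "or",
             "to", "is", "are", "with", "by", "such", "as"}

def candidate_phrases(text, max_len=3):
    """Split text into maximal runs of non-stopword tokens and count
    every sub-phrase of 2..max_len words as a keyword candidate."""
    tokens = re.findall(r"[A-Za-z][A-Za-z-]*", text.lower())
    runs, current = [], []
    for tok in tokens:
        if tok in STOPWORDS:
            if current:
                runs.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        runs.append(current)
    counts = Counter()
    for run in runs:
        for n in range(2, max_len + 1):
            for i in range(len(run) - n + 1):
                counts[" ".join(run[i:i + n])] += 1
    return counts

text = ("The controlled vocabulary links multi-word phrases such as "
        "'partial differential equation' to the subject classification; "
        "a partial differential equation is indexed by such phrases.")
for phrase, freq in candidate_phrases(text).most_common(3):
    print(phrase, freq)
```

A production system would additionally apply morphological analysis and match the candidates against the controlled vocabulary instead of ranking them by raw frequency.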
  20. Lange, C.; Ion, P.; Dimou, A.; Bratsas, C.; Sperber, W.; Kohlhasel, M.; Antoniou, I.: Getting mathematics towards the Web of Data : the case of the Mathematics Subject Classification (2012) 0.00
    
    Abstract
    The Mathematics Subject Classification (MSC), maintained by the American Mathematical Society's Mathematical Reviews (MR) and FIZ Karlsruhe's Zentralblatt für Mathematik (Zbl), is a scheme for classifying publications in mathematics according to their subjects. While it is widely used, its traditional, idiosyncratic conceptualization and representation require custom implementations of search, query and annotation support. This has not encouraged people to create and explore connections between mathematics and subjects of related domains (e.g. science), and it has made the scheme hard to maintain. We have reimplemented the current version, MSC2010, as a Linked Open Dataset using SKOS, with the goal of turning it into the new MSC authority. This paper explains the motivation, the details of our design considerations, and how we realized them in the implementation. We present in-the-field use cases and point out how e-science applications can take advantage of the MSC LOD set. We conclude with a roadmap for bootstrapping the presence of mathematical and mathematics-based science, technology, and engineering knowledge on the Web of Data, where it has been noticeably underrepresented so far, starting from MSC/SKOS as a seed.
    Footnote
    See also the published article entitled: Bringing mathematics towards the Web of Data: the case of the Mathematics Subject Classification
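The SKOS modelling described in the abstract above can be sketched in Turtle. The class codes shown are standard MSC2010 codes, but the `msc:` namespace URI is a placeholder, not the URI of the published dataset:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
# Placeholder namespace - the published MSC/SKOS dataset uses its own URIs.
@prefix msc:  <http://example.org/msc2010/> .

msc:03Bxx a skos:Concept ;
    skos:notation "03Bxx" ;
    skos:prefLabel "General logic"@en ;
    skos:narrower msc:03B05 .

msc:03B05 a skos:Concept ;
    skos:notation "03B05" ;
    skos:prefLabel "Classical propositional logic"@en ;
    skos:broader msc:03Bxx ;
    skos:inScheme msc:MSC2010 .
```

Once the scheme is in this form, generic SPARQL and SKOS tooling can traverse the broader/narrower hierarchy, replacing the custom search, query, and annotation implementations the abstract mentions.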

Years

Languages

  • e 22
  • d 11

Types