Search (33 results, page 2 of 2)

  • Active filter: language_ss:"e"
  • Active filter: type_ss:"p"
  1. Balikova, M.: The national bibliography of a small country in international context (2000) 0.00
    0.0016616598 = product of:
      0.0033233196 = sum of:
        0.0033233196 = product of:
          0.006646639 = sum of:
            0.006646639 = weight(_text_:a in 5397) [ClassicSimilarity], result of:
              0.006646639 = score(doc=5397,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.15287387 = fieldWeight in 5397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5397)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
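    All of the score breakdowns on this page follow Lucene's ClassicSimilarity (TF-IDF) explain format: the final score is coord · queryWeight · fieldWeight, with tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf · queryNorm, and fieldWeight = tf · idf · fieldNorm. A minimal sketch that reproduces the arithmetic for entry 1 (constants read off the tree above; the helper names are illustrative, not Lucene API calls):

    import math

    # Sketch of Lucene ClassicSimilarity (TF-IDF) scoring, reconstructed from
    # the explain tree for entry 1 (doc 5397). Helper names are illustrative,
    # not Lucene API calls.

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
        tf = math.sqrt(freq)                    # 1.4142135 for freq=2.0
        idf_t = idf(doc_freq, max_docs)         # 1.153047 for _text_:a
        query_weight = idf_t * query_norm       # 0.043477926
        field_weight = tf * idf_t * field_norm  # 0.15287387
        return coord * query_weight * field_weight

    # Entry 1: freq=2.0, fieldNorm=0.09375, and two coord(1/2) factors
    print(classic_score(freq=2.0, doc_freq=37942, max_docs=44218,
                        query_norm=0.037706986, field_norm=0.09375,
                        coord=0.5 * 0.5))       # ~0.0016616598, shown as 0.00

    The remaining entries differ only in freq, fieldNorm (which encodes field length) and the document ID; entry 5, for instance, reaches the same fieldWeight with freq=8.0 because its smaller fieldNorm (0.046875) offsets the larger tf.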
    
  2. Broughton, V.: A new classification for the literature of religion (2000) 0.00
    0.0016616598 = product of:
      0.0033233196 = sum of:
        0.0033233196 = product of:
          0.006646639 = sum of:
            0.006646639 = weight(_text_:a in 5398) [ClassicSimilarity], result of:
              0.006646639 = score(doc=5398,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.15287387 = fieldWeight in 5398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5398)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Elazar, D.H.: The making of a classification scheme for libraries of Judaica (2000) 0.00
    0.0016616598 = product of:
      0.0033233196 = sum of:
        0.0033233196 = product of:
          0.006646639 = sum of:
            0.006646639 = weight(_text_:a in 5400) [ClassicSimilarity], result of:
              0.006646639 = score(doc=5400,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.15287387 = fieldWeight in 5400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5400)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Stoklasova, B.: The national bibliography of a small country in international context (2000) 0.00
    0.0016616598 = product of:
      0.0033233196 = sum of:
        0.0033233196 = product of:
          0.006646639 = sum of:
            0.006646639 = weight(_text_:a in 5415) [ClassicSimilarity], result of:
              0.006646639 = score(doc=5415,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.15287387 = fieldWeight in 5415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5415)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.00
    0.0016616598 = product of:
      0.0033233196 = sum of:
        0.0033233196 = product of:
          0.006646639 = sum of:
            0.006646639 = weight(_text_:a in 640) [ClassicSimilarity], result of:
              0.006646639 = score(doc=640,freq=8.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.15287387 = fieldWeight in 640, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Involving users in early phases of software development has become a common strategy, as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure that allows experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analyses to enable large-scale A/B experiments for academic search.
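    The living-lab setup described above (experimental rankers served alongside the production system, with real users split between them) can be sketched with a small piece of illustrative code. This is a toy illustration of deterministic A/B assignment under our own naming, not STELLA's actual API:

    import hashlib

    # Toy sketch of A/B assignment for a living-lab experiment: each session
    # is routed consistently to the production ranker or an experimental one.
    SYSTEMS = ["production", "experimental"]

    def assign_system(session_id: str) -> str:
        # Hash the session ID so a user keeps the same system across queries.
        digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
        return SYSTEMS[int(digest, 16) % len(SYSTEMS)]

    def log_interaction(session_id: str, query: str, clicked_rank: int) -> dict:
        # Interaction logs from both arms feed the later win/loss analysis.
        return {"system": assign_system(session_id), "query": query,
                "clicked_rank": clicked_rank}

    print(log_interaction("session-42", "academic search", clicked_rank=1))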
  6. Kemp, A. de: Information provision : a publisher's point of view in changing times and with new technologies (1993) 0.00
    0.0015666279 = product of:
      0.0031332558 = sum of:
        0.0031332558 = product of:
          0.0062665115 = sum of:
            0.0062665115 = weight(_text_:a in 6235) [ClassicSimilarity], result of:
              0.0062665115 = score(doc=6235,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.14413087 = fieldWeight in 6235, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6235)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Lange, C.; Ion, P.; Dimou, A.; Bratsas, C.; Sperber, W.; Kohlhase, M.; Antoniou, I.: Getting mathematics towards the Web of Data : the case of the Mathematics Subject Classification (2012) 0.00
    0.0015481601 = product of:
      0.0030963202 = sum of:
        0.0030963202 = product of:
          0.0061926404 = sum of:
            0.0061926404 = weight(_text_:a in 111) [ClassicSimilarity], result of:
              0.0061926404 = score(doc=111,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.14243183 = fieldWeight in 111, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Mathematics Subject Classification (MSC), maintained by the American Mathematical Society's Mathematical Reviews (MR) and FIZ Karlsruhe's Zentralblatt für Mathematik (Zbl), is a scheme for classifying publications in mathematics according to their subjects. While it is widely used, its traditional, idiosyncratic conceptualization and representation require custom implementations of search, query and annotation support. This did not encourage people to create and explore connections between mathematics and related domains (e.g. science), and it made the scheme hard to maintain. We have reimplemented the current version, MSC2010, as a Linked Open Dataset using SKOS, and our focus is on turning it into the new MSC authority. This paper explains the motivation, details our design considerations, and describes how we realized them in the implementation. We present in-the-field use cases and point out how e-science applications can take advantage of the MSC LOD set. We conclude with a roadmap for bootstrapping the presence of mathematical and mathematics-based science, technology, and engineering knowledge on the Web of Data, where it has been noticeably underrepresented so far, starting from MSC/SKOS as a seed.
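    To make the SKOS reimplementation concrete, the following sketch (using the third-party rdflib library) models one MSC class as a skos:Concept with a notation, a label and a broader class. The namespace URI and the label are placeholders, not the official MSC2010 LOD identifiers:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Placeholder namespace; the official MSC2010 LOD URIs differ.
    MSC = Namespace("http://example.org/msc2010/")

    g = Graph()
    g.bind("skos", SKOS)
    g.bind("msc", MSC)

    g.add((MSC["03B30"], RDF.type, SKOS.Concept))
    g.add((MSC["03B30"], SKOS.notation, Literal("03B30")))
    g.add((MSC["03B30"], SKOS.prefLabel,
           Literal("Foundations of classical theories", lang="en")))
    g.add((MSC["03B30"], SKOS.broader, MSC["03Bxx"]))  # hierarchy via skos:broader

    print(g.serialize(format="turtle"))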
  8. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    0.0014390396 = product of:
      0.0028780792 = sum of:
        0.0028780792 = product of:
          0.0057561584 = sum of:
            0.0057561584 = weight(_text_:a in 976) [ClassicSimilarity], result of:
              0.0057561584 = score(doc=976,freq=6.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.13239266 = fieldWeight in 976, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=976)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
    Type
    a
  9. Pejtersen, A.M.; Jensen, H.; Speck, P.; Villumsen, S.; Weber, S.: Catalogs for children : the Book House project on visualization of database retrieval and classification (1993) 0.00
    0.0013847164 = product of:
      0.0027694327 = sum of:
        0.0027694327 = product of:
          0.0055388655 = sum of:
            0.0055388655 = weight(_text_:a in 6232) [ClassicSimilarity], result of:
              0.0055388655 = score(doc=6232,freq=8.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.12739488 = fieldWeight in 6232, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6232)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper describes the Book House system, which is designed to support children's information retrieval in libraries as part of their education. It is a shareware program available on CD-ROM and discs, and comprises functionality for database searching as well as for the classification and storage of book information in the database. The system concept is based on an understanding of children's domain structures and their capabilities for categorizing information needs in connection with their activities in public libraries, in school libraries or in schools. These structures are visualized in the interface by using metaphors and multimedia technology. Through the use of text, images and animation, the Book House helps children, even at a very early age, to learn by doing in an enjoyable way that plays on their previous experience with computer games. Both words and pictures can be used for searching; this makes the system suitable for all age groups. Even children who have not yet learned to read properly can, by selecting pictures, search for and find books they would like to have read aloud. Thus, at the very beginning of their school period, they can learn to search for books on their own. For the library community itself, such a system will provide an extended service which will increase the number of children's own searches and also improve the relevance, quality and utilization of the collections in the libraries. Market research on the need for an annual indexing service for books in the Book House format is being prepared by the Danish Library Center.
  10. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.00
    0.0013847164 = product of:
      0.0027694327 = sum of:
        0.0027694327 = product of:
          0.0055388655 = sum of:
            0.0055388655 = weight(_text_:a in 7953) [ClassicSimilarity], result of:
              0.0055388655 = score(doc=7953,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.12739488 = fieldWeight in 7953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7953)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments conducted on the OKAPI experimental text information retrieval system investigated a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson/Sparck Jones weighting functions used in TREC-1.
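    For context, a minimal sketch of the 'standard' Robertson/Sparck Jones relevance weight mentioned above, in its textbook form with 0.5 smoothing (variable names are ours):

    import math

    def rsj_weight(N: int, n: int, R: int = 0, r: int = 0) -> float:
        # Robertson/Sparck Jones relevance weight with 0.5 smoothing.
        #   N: documents in the collection,  n: documents containing the term,
        #   R: known relevant documents,     r: relevant documents with the term.
        # With R = r = 0 this reduces to an idf-like weight.
        return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                        ((n - r + 0.5) * (R - r + 0.5)))

    print(rsj_weight(N=741_856, n=1_234))  # no relevance information yet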
  11. Robertson, S.E.: OKAPI at TREC-3 (1995) 0.00
    0.0013707994 = product of:
      0.0027415988 = sum of:
        0.0027415988 = product of:
          0.0054831975 = sum of:
            0.0054831975 = weight(_text_:a in 5694) [ClassicSimilarity], result of:
              0.0054831975 = score(doc=5694,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.12611452 = fieldWeight in 5694, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5694)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports text information retrieval experiments performed as part of the 3rd round of Text Retrieval Conferences (TREC) using the Okapi online catalogue system at City University, UK. The emphasis in TREC-3 was on: further refinement of term weighting functions; an investigation of run-time passage determination and searching; expansion of ad hoc queries by terms extracted from the top documents retrieved by a trial search; new methods for choosing query expansion terms after relevance feedback, now split into methods of ranking terms prior to selection and subsequent selection procedures; and the development of a user interface procedure within the new TREC interactive search framework.
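    The TREC-3 term weighting refinements are the line of Okapi work from which the BM25 function later emerged; as a sketch, the now-conventional form of its per-term contribution (our own formulation, with the usual k1 and b parameters and a non-negative idf variant) looks like this:

    import math

    def bm25_term(tf: float, df: int, N: int, dl: float, avgdl: float,
                  k1: float = 1.2, b: float = 0.75) -> float:
        # One term's BM25 contribution: idf times a saturated,
        # length-normalized term frequency. k1 controls tf saturation,
        # b the strength of document-length normalization.
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        return idf * (tf * (k1 + 1.0)) / (tf + k1 * (1.0 - b + b * dl / avgdl))

    # Toy example: a term occurring 3 times in a slightly long document
    print(bm25_term(tf=3.0, df=1_200, N=740_000, dl=120.0, avgdl=90.0))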
  12. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.00
    0.0011991997 = product of:
      0.0023983994 = sum of:
        0.0023983994 = product of:
          0.004796799 = sum of:
            0.004796799 = weight(_text_:a in 849) [ClassicSimilarity], result of:
              0.004796799 = score(doc=849,freq=6.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.11032722 = fieldWeight in 849, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    ChatGPT, a general-purpose conversational chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impacts could be enormous, as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper suggests adjusting learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish these learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises concerns that students may outsource assessment tasks. This paper concludes that new formats of assessment are needed to focus on the creativity and critical thinking that AI cannot replace.
  13. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.00
    8.308299E-4 = product of:
      0.0016616598 = sum of:
        0.0016616598 = product of:
          0.0033233196 = sum of:
            0.0033233196 = weight(_text_:a in 39) [ClassicSimilarity], result of:
              0.0033233196 = score(doc=39,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.07643694 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
