Search (786 results, page 1 of 40)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.11
    Content
    Presentation given at: European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.08
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. British Library / FAST/Dewey Review Group: Consultation on subject indexing and classification standards applied by the British Library (2015) 0.06
    Abstract
    A broad-based review of the subject and classification schemes used on British Library records began in late 2014. The review was undertaken in response to a number of drivers, including an increasing demand on available resources due to the rapidly expanding digital publishing arena combined with a continuing steady state in print publication patterns, and increased demands on metadata to meet changing audience expectations.
  4. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.06
    Abstract
    Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
  5. Cathro, W.: New frameworks for resource discovery and delivery : the changing role of the catalogue (2006) 0.05
    Abstract
    There is currently a lively debate about the role of the library catalogue and its relationship to other resource discovery tools. An example of this debate is the recent publication of a report commissioned by the Library of Congress on "the changing nature of the catalogue". As part of this debate, the role of union catalogues is also being re-examined. Some commentators have suggested that union catalogues, by virtue of their size, can aggregate both supply and demand, thus increasing the chance that a relatively little-used resource will be discovered by somebody for whom it is relevant. During the past year, the National Library of Australia (NLA) has been considering the future of its catalogue and its role in the resource discovery and delivery process. The review was prompted, in part, by the redevelopment of the Australian union catalogue and its exposure on the web as a free public service, badged as Libraries Australia. The NLA examined the enablers and inhibitors of the proposition "that it replace its catalogue with Libraries Australia, as the primary database to be searched by users". Flowing from this review, the NLA is aiming to undertake a number of tasks to move, in the medium to long term, towards a scenario in which it could deprecate its local catalogue. Related: the Calhoun report.
  6. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.05
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand that digital library curators program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
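    Code sketch
    OXPath itself is a declarative extension of XPath rather than a general-purpose programming language, so the following is only a rough stand-in for the idea sketched in the abstract: a minimal Python/lxml snippet in which the extraction logic is a declarative mapping from field names to XPath expressions evaluated over each record node. The HTML fragment, field names and class names are invented for illustration and are not taken from the article.

    # Illustrative only: mimics the declarative extraction idea with plain XPath.
    from lxml import html

    PAGE = """
    <html><body>
      <div class="record">
        <span class="title">Ontology learning from text</span><span class="year">2010</span>
      </div>
      <div class="record">
        <span class="title">Terminology registries</span><span class="year">2007</span>
      </div>
    </body></html>
    """

    # Declarative part: field name -> XPath relative to each record node.
    FIELDS = {
        "title": "string(.//span[@class='title'])",
        "year": "string(.//span[@class='year'])",
    }

    def harvest(page):
        tree = html.fromstring(page)
        for record in tree.xpath("//div[@class='record']"):
            yield {name: record.xpath(expr) for name, expr in FIELDS.items()}

    for row in harvest(PAGE):
        print(row)  # e.g. {'title': 'Ontology learning from text', 'year': '2010'}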
  7. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.05
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  8. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.04
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is also taken into account; three thesauri are chosen for this purpose: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, the ways in which thesauri can contribute to the Semantic Web, and the challenges that must be addressed to improve their integration with the Semantic Web are discussed.
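    Code sketch
    A minimal sketch of how a thesaurus relation pair is expressed in SKOS, using the Python rdflib library (which ships a SKOS namespace); the concepts and URIs are invented for illustration and are not taken from AGROVOC, EuroVoc or the UNESCO Thesaurus.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus/")
    g = Graph()
    g.bind("skos", SKOS)

    scheme = EX["scheme"]
    g.add((scheme, RDF.type, SKOS.ConceptScheme))

    # A thesaurus BT/NT pair rendered as skos:broader / skos:narrower.
    animals = EX["animals"]
    cats = EX["cats"]
    for concept, label in [(animals, "animals"), (cats, "cats")]:
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
        g.add((concept, SKOS.inScheme, scheme))
    g.add((cats, SKOS.broader, animals))
    g.add((animals, SKOS.narrower, cats))

    print(g.serialize(format="turtle"))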
  9. Brinkman's cumulative catalogue on CD-ROM (1996-) 0.04
    Date
    16. 2.1997 16:22:51
  10. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.04
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  11. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.04
    Abstract
    The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, is presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using a similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09.
    Date
    22. 2.2017 13:00:42
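    Code sketch
    MIaS is built on Apache Lucene, so the code below is not its implementation; it is only a toy illustration of the idea the abstract describes, namely treating every subtree of a Presentation MathML expression as an indexable subformula and scoring two formulae by the overlap of their subformulae. The example formulae are made up.

    import xml.etree.ElementTree as ET

    def subformulae(node):
        """Serialize every subtree of a (Presentation) MathML element."""
        yield ET.tostring(node, encoding="unicode")
        for child in node:
            yield from subformulae(child)

    def similarity(mathml_a, mathml_b):
        """Jaccard overlap of the two sets of subformulae."""
        a = set(subformulae(ET.fromstring(mathml_a)))
        b = set(subformulae(ET.fromstring(mathml_b)))
        return len(a & b) / len(a | b) if a | b else 0.0

    x_plus_y = "<mrow><mi>x</mi><mo>+</mo><mi>y</mi></mrow>"
    x_plus_z = "<mrow><mi>x</mi><mo>+</mo><mi>z</mi></mrow>"
    print(similarity(x_plus_y, x_plus_z))  # the <mi>x</mi> and <mo>+</mo> subtrees are shared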
  12. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.04
    Abstract
    This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current result in its rather preliminary form than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for applying the approach to knowledge organization of other subject areas with Wikidata.
    Content
    Cf.: http://ceur-ws.org/Vol-1676/paper2.pdf. Other workshop material, including presentations, is available on the website <https://at-web1.comp.glam.ac.uk/pages/research/hypermedia/nkos/nkos2016/programme.html>.
    Pages
    S.15-22
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016), co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/ = urn:nbn:de:0074-1676-5]
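    Code sketch
    A sketch of how such a classification could be pulled back out of Wikidata over its public SPARQL endpoint (https://query.wikidata.org/sparql), using Python with the requests library. The property path wdt:P31/wdt:P279* ("instance of" a subclass of) is standard Wikidata SPARQL; the item identifier Q6423319 is assumed here to denote "knowledge organization system" and should be verified against the current Wikidata item before use.

    import requests

    QUERY = """
    SELECT ?kos ?kosLabel WHERE {
      ?kos wdt:P31/wdt:P279* wd:Q6423319 .   # instance of (a subclass of) a KOS; item ID is an assumption
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 20
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "kos-classification-sketch/0.1"},
    )
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["kos"]["value"], "-", row["kosLabel"]["value"])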
  13. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.04
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
  14. Díaz, P.: Usability of hypermedia educational e-books (2003) 0.04
    Abstract
    To arrive at relevant and reliable conclusions concerning the usability of a hypermedia educational e-book, developers have to apply a well-defined evaluation procedure as well as a set of clear, concrete and measurable quality criteria. Evaluating an educational tool involves not only testing the user interface but also the didactic method, the instructional materials and the interaction mechanisms to prove whether or not they help users reach their goals for learning. This article presents a number of evaluation criteria for hypermedia educational e-books and describes how they are embedded into an evaluation procedure. This work is chiefly aimed at helping education developers evaluate their systems, as well as at providing them with guidance for addressing educational requirements during the design process. In recent years, more and more educational e-books are being created, whether by academics trying to keep pace with the advanced requirements of the virtual university or by publishers seeking to meet the increasing demand for educational resources that can be accessed anywhere and anytime, and that include multimedia information, hypertext links and powerful search and annotating mechanisms. To develop a useful educational e-book, many things have to be considered, such as the reading patterns of users, accessibility for different types of users and computer platforms, copyright and legal issues, development of new business models and so on. Addressing usability is very important since e-books are interactive systems and, consequently, have to be designed with the needs of their users in mind. Evaluating usability involves analyzing whether systems are effective, efficient and secure for use; easy to learn and remember; and have a good utility. Any interactive system, as e-books are, has to be assessed to determine if it is really usable as well as useful. Such an evaluation is not only concerned with assessing the user interface but is also aimed at analyzing whether the system can be used in an efficient way to meet the needs of its users - who in the case of educational e-books are learners and teachers. Evaluation provides the opportunity to gather valuable information about design decisions. However, to be successful, the evaluation has to be carefully planned and prepared so developers collect appropriate and reliable data from which to draw relevant conclusions.
  15. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: ¬The MusArt music-retrieval system (2002) 0.04
    Abstract
    Music websites are ubiquitous, and music downloads, such as MP3, are a major source of Web traffic. As the amount of musical content increases and the Web becomes an important mechanism for distributing music, we expect to see a rising demand for music search services. Many currently available music search engines rely on file names, song title, composer or performer as the indexing and retrieval mechanism. These systems do not make use of the musical content. We believe that a more natural, effective, and usable music-information retrieval (MIR) system should have audio input, where the user can query with musical content. We are developing a system called MusArt for audio-input MIR. With MusArt, as with other audio-input MIR systems, a user sings or plays a theme, hook, or riff from the desired piece of music. The system transcribes the query and searches for related themes in a database, returning the most similar themes, given some measure of similarity. We call this "retrieval by query." In this paper, we describe the architecture of MusArt. An important element of MusArt is metadata creation: we believe that it is essential to automatically abstract important musical elements, particularly themes. Theme extraction is performed by a subsystem called MME, which we describe later in this paper. Another important element of MusArt is its support for a variety of search engines, as we believe that MIR is too complex for a single approach to work for all queries. Currently, MusArt supports a dynamic time-warping search engine that has high recall, and a complementary stochastic search engine that searches over themes, emphasizing speed and relevancy. The stochastic search engine is discussed in this paper.
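    Code sketch
    This is not the MusArt implementation; it is a generic dynamic time warping (DTW) distance between two pitch sequences, the kind of alignment a "dynamic time-warping search engine" relies on when matching a sung query against stored themes. The example pitch values are invented.

    def dtw_distance(query, theme):
        """Classic DTW: O(len(query) * len(theme)), absolute pitch difference as local cost."""
        n, m = len(query), len(theme)
        INF = float("inf")
        dist = [[INF] * (m + 1) for _ in range(n + 1)]
        dist[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(query[i - 1] - theme[j - 1])
                dist[i][j] = cost + min(dist[i - 1][j],      # skip a query note
                                        dist[i][j - 1],      # skip a theme note
                                        dist[i - 1][j - 1])  # match the two notes
        return dist[n][m]

    theme = [60, 62, 64, 65, 67]             # stored theme as MIDI pitches
    sung = [60, 60, 62, 64, 66, 67]          # time-stretched, slightly off-pitch query
    print(dtw_distance(sung, theme))         # small distance despite the differences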
  16. Neubauer, W.: ¬The Knowledge portal or the vision of easy access to information (2009) 0.04
    Abstract
    From a quantitative and qualitative point of view, the ETH Library offers its users an extensive choice of information services. In this respect all researchers, all scientists and also all students have access to nearly all relevant information. This is one side of the coin. On the other hand, this broad but also heterogeneous bundle of information sources has disadvantages which should not be underestimated: the more information services and information channels you have, the more complex it is to find what you need for your scientific work. A portal-like integration of all the different information resources is still missing. The vision and main goal of the project "Knowledge Portal" is to develop a central access system, a "single-point-of-access" for all electronic information services. This means that all these sources - from the library's catalogue and the full-text in-house applications to external, licensed sources - should be accessible via one central Web service. Although the primary target group for this vision is the science community of ETH Zurich, the interested public should also be taken into account, for the library also has a nation-wide responsibility. The general idea of launching a complex project like this came from a survey the library conducted one and a half years ago. We asked a defined sample of scientists what they expected from their library, and one constant answer was that they wanted a single point of access to all the electronic library services and, besides this, that the search processes should be as simple as possible. We accepted this demand as a mandate to develop a "single-point-of-access" to all electronic services the library provides. The presentation gives an overview of the general idea of the project and describes its current status.
  17. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.03
    Abstract
    This paper reports on the second part of an initiative by the authors to research classification systems using the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  18. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.03
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
    Source
    http://chato.cl/papers/baeza06_general_pagerank_damping_functions_link_ranking.pdf [Proceedings of the ACM Special Interest Group on Information Retrieval (SIGIR) Conference, SIGIR'06, August 6-10, 2006, Seattle, Washington, USA]
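    Code sketch
    A minimal numerical sketch of the family described in the abstract above: page importance is propagated through links, and a path of length t contributes with weight damping(t). With exponential decay this reduces to the power-series form of PageRank; the linear and hyperbolic variants correspond to the other two damping functions studied in the paper. The toy link matrix and parameter values are invented.

    import numpy as np

    def damped_rank(adjacency, damping, max_len=50):
        """rank = sum_t damping(t) * p0 @ P^t, with P the row-normalized link matrix."""
        A = np.asarray(adjacency, dtype=float)
        out_deg = A.sum(axis=1, keepdims=True)
        P = np.divide(A, out_deg, out=np.zeros_like(A), where=out_deg > 0)
        n = A.shape[0]
        walk = np.full(n, 1.0 / n)          # uniform starting distribution p0
        rank = np.zeros(n)
        for t in range(max_len + 1):
            rank += damping(t) * walk       # weight paths of length t by damping(t)
            walk = walk @ P
        return rank / rank.sum()

    alpha = 0.85
    exponential = lambda t: (1 - alpha) * alpha ** t   # PageRank-style decay
    linear = lambda t: max(0.0, 10 - t)                # truncated linear decay
    hyperbolic = lambda t: 1.0 / (t + 1) ** 2          # hyperbolic decay

    links = [[0, 1, 1, 0],
             [0, 0, 1, 0],
             [1, 0, 0, 1],
             [0, 0, 1, 0]]
    for name, f in [("exponential", exponential), ("linear", linear), ("hyperbolic", hyperbolic)]:
        print(name, np.round(damped_rank(links, f), 3))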
  19. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.03
    Date
    7.11.2008 10:40:22
  20. Van Dijck, P.: Introduction to XFML (2003) 0.03
    Abstract
    Van Dijck builds up an example of actual XFML by showing how to organize tourist information about what restaurants in what cities feature which kind of music: <facet id="city">City</facet> and <topic id="ny" facetid="city"><name>New York</name></topic> combine to mean that New York is the name of a city internally represented as "ny". It is written in the usual clear and practical style of articles on xml.com. Highly recommended as an introduction for anyone interested in XFML.
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
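    Code sketch
    To make the fragment quoted in the abstract concrete, a short sketch that parses it and resolves each topic to its facet via the facetid attribute. Only the <facet> and <topic> elements come from the article; the surrounding <xfml> wrapper is an assumption added so the snippet forms a well-formed document.

    import xml.etree.ElementTree as ET

    DOC = """
    <xfml>
      <facet id="city">City</facet>
      <topic id="ny" facetid="city"><name>New York</name></topic>
    </xfml>
    """

    root = ET.fromstring(DOC)
    facets = {f.get("id"): f.text for f in root.findall("facet")}
    for topic in root.findall("topic"):
        name = topic.findtext("name")
        facet = facets.get(topic.get("facetid"))
        print(f'"{name}" ({topic.get("id")}) is a value of the facet "{facet}"')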

Languages

  • e 653
  • d 105
  • a 8
  • i 5
  • el 3
  • f 1
  • nl 1
  • sp 1

Types

  • a 390
  • r 17
  • s 16
  • i 14
  • n 10
  • m 8
  • x 8
  • p 7
  • b 6
