Search (71 results, page 1 of 4)

  • language_ss:"e"
  • type_ss:"a"
  • type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.28
    0.27778324 = product of:
      0.5555665 = sum of:
        0.07936664 = product of:
          0.23809992 = sum of:
            0.23809992 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.23809992 = score(doc=230,freq=2.0), product of:
                0.3177388 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03747799 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.23809992 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23809992 = score(doc=230,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.23809992 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23809992 = score(doc=230,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5 = coord(3/6)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
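    The indented trees beneath each hit are Lucene "explain" dumps for the ClassicSimilarity (TF-IDF) scorer: on the document side each term contributes tf * idf * fieldNorm, on the query side idf * queryNorm, and the clause sum is scaled by coord() for the fraction of query clauses that matched. A minimal Python sketch of that formula, reproducing the 0.23809992 leg for the term "2f" in entry 1 (an illustrative helper, not this catalog's actual code):

      import math

      def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm):
          tf = math.sqrt(freq)                                # tf(freq=2.0) = 1.4142135
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # idf(docFreq=24, maxDocs=44218) = 8.478011
          query_weight = idf * query_norm                     # 0.3177388
          field_weight = tf * idf * field_norm                # 0.7493574
          return query_weight * field_weight

      w = classic_similarity(freq=2.0, doc_freq=24, max_docs=44218,
                             field_norm=0.0625, query_norm=0.03747799)
      print(w)  # ~0.23809992, matching the explain tree above
      # The clause sum 0.5555665 is then scaled by coord(3/6) = 0.5, because only
      # three of the six query clauses matched, giving the reported 0.27778324.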
  2. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.04
    0.043203976 = product of:
      0.12961192 = sum of:
        0.121149 = weight(_text_:ranking in 2565) [ClassicSimilarity], result of:
          0.121149 = score(doc=2565,freq=8.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5976189 = fieldWeight in 2565, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.008462917 = product of:
          0.025388751 = sum of:
            0.025388751 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.025388751 = score(doc=2565,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's, using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
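    Entry 2's family generalizes PageRank by weighting endorsement arriving over paths of length t with a damping function damping(t). A minimal numpy sketch under stated assumptions (the toy graph, the truncation horizon T, and the exact normalizations of the linear and hyperbolic variants are mine, not the paper's):

      import numpy as np

      def damped_rank(P, damping, T=50):
          # Generic link-based rank: sum_t damping(t) * v P^t, truncated at T hops.
          n = P.shape[0]
          v = np.full(n, 1.0 / n)         # uniform starting vector
          rank, walk = np.zeros(n), v.copy()
          for t in range(T):
              rank += damping(t) * walk
              walk = walk @ P             # propagate one more link hop
          return rank

      alpha = 0.85
      exponential = lambda t: (1 - alpha) * alpha**t            # = PageRank
      linear      = lambda t, L=10: 2 * (L - t) / (L * (L + 1)) if t < L else 0.0
      hyperbolic  = lambda t: (6 / np.pi**2) / (t + 1)**2       # sums to ~1 as T grows

      P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # toy 3-page cycle
      print(damped_rank(P, exponential))  # ~ stationary PageRank, up to truncation at T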
  3. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.02
    0.023012474 = product of:
      0.06903742 = sum of:
        0.0605745 = weight(_text_:ranking in 2564) [ClassicSimilarity], result of:
          0.0605745 = score(doc=2564,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.008462917 = product of:
          0.025388751 = sum of:
            0.025388751 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.025388751 = score(doc=2564,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
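    The closing result of entry 3 — that k Power-Method iterations yield exactly the degree-k Maclaurin polynomial of PageRank in alpha — can be checked numerically. A small sketch (toy graph and names are illustrative):

      import numpy as np

      def pagerank_power(P, alpha=0.85, k=20):
          # k Power-Method steps: x <- alpha * x P + (1 - alpha) * v, starting from x = v
          n = P.shape[0]
          v = np.full(n, 1.0 / n)
          x = v.copy()
          for _ in range(k):
              x = alpha * (x @ P) + (1 - alpha) * v
          return x

      def pagerank_maclaurin(P, alpha=0.85, k=20):
          # degree-k Maclaurin polynomial of PageRank(alpha):
          # v + sum_{t=1..k} alpha^t * (v P^t - v P^(t-1))
          n = P.shape[0]
          v = np.full(n, 1.0 / n)
          acc, prev = v.copy(), v.copy()
          for t in range(1, k + 1):
              cur = prev @ P
              acc += alpha**t * (cur - prev)
              prev = cur
          return acc

      P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # toy 3-page cycle
      print(np.allclose(pagerank_power(P), pagerank_maclaurin(P)))  # True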
  4. Janée, G.; Frew, J.; Hill, L.L.: Issues in georeferenced digital libraries (2004) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 1165) [ClassicSimilarity], result of:
          0.084804304 = score(doc=1165,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 1165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1165)
      0.16666667 = coord(1/6)
    
    Abstract
    Based on a decade's experience with the Alexandria Digital Library Project, we present seven issues that arise in creating georeferenced digital libraries and that appear to be intrinsic to the problem of creating any library-like information system that operates on georeferenced and geospatial resources. The first and foremost issue is providing discovery of georeferenced resources. Related to discovery are the issues of gazetteer integration and specialized ranking of search results. Strong data typing and scalability are implementation issues. Providing spatial context is a critical user interface issue. Finally, sophisticated resource access mechanisms are necessary to operate on geospatial resources.
  5. Hoffmann, P.; Médini, L.; Ghodous, P.: Using context to improve semantic interoperability (2006) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 4434) [ClassicSimilarity], result of:
          0.084804304 = score(doc=4434,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 4434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4434)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper presents an approach to enhance interoperability between heterogeneous ontologies. It consists of adapting the ranking of concepts to the final users and their work context. The computations are based on an upper domain ontology, a task hierarchy and a user profile. As prerequisites, OWL ontologies have to be given, and an articulation ontology has to be built.
  6. Whitney, C.; Schiff, L.: The Melvyl Recommender Project : developing library recommendation services (2006) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1173) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1173,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1173, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1173)
      0.16666667 = coord(1/6)
    
    Abstract
    Popular commercial online services such as Google, eBay, Amazon, and Netflix have evolved quickly over the last decade to help people find what they want, developing information retrieval strategies such as usefully ranked results, spelling correction, and recommender systems. Online library catalogs (OPACs), in contrast, have changed little and are notoriously difficult for patrons to use (University of California Libraries, 2005). Over the past year (June 2005 to the present), the Melvyl Recommender Project (California Digital Library, 2005) has been exploring methods and the feasibility of closing the gap between the features that library patrons want and have come to expect from information retrieval systems and what libraries are currently equipped to deliver. The project team conducted exploratory work in five topic areas: relevance ranking, auto-correction, use of a text-based discovery system, user interface strategies, and recommending. This article focuses specifically on the recommending portion of the project and potential extensions to that work.
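    The article does not give Melvyl's actual recommending algorithm, but library recommenders of the kind entry 6 discusses are commonly built on circulation co-occurrence ("patrons who borrowed X also borrowed Y"). A generic sketch under that assumption, with invented toy data:

      from collections import Counter
      from itertools import combinations

      # toy circulation data: each set is one patron's borrowed items (hypothetical)
      patrons = [{"A", "B", "C"}, {"A", "B"}, {"B", "C", "D"}]

      pair_counts = Counter()
      for items in patrons:
          for x, y in combinations(sorted(items), 2):
              pair_counts[(x, y)] += 1          # count items borrowed together

      def recommend(item, k=3):
          scores = Counter()
          for (x, y), c in pair_counts.items():
              if x == item: scores[y] += c
              if y == item: scores[x] += c
          return [i for i, _ in scores.most_common(k)]

      print(recommend("A"))  # -> ['B', 'C']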
  7. Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1557) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1557,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1557)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
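    A rough numpy sketch of one m-RNN generation step as entry 7 describes it: a word embedding, the recurrent state, and a CNN image feature are fused in a multimodal layer that feeds a softmax over the vocabulary. Layer sizes, initialization, and activations here are placeholders, not the paper's trained model:

      import numpy as np

      rng = np.random.default_rng(0)
      D_w, D_r, D_m, D_i, V = 128, 256, 512, 4096, 1000   # toy layer sizes

      W_wr = rng.normal(0, 0.01, (D_w, D_r))   # word embedding -> recurrent layer
      W_r  = rng.normal(0, 0.01, (D_r, D_r))   # recurrent -> recurrent
      V_w  = rng.normal(0, 0.01, (D_w, D_m))   # word embedding -> multimodal layer
      V_r  = rng.normal(0, 0.01, (D_r, D_m))   # recurrent state -> multimodal layer
      V_i  = rng.normal(0, 0.01, (D_i, D_m))   # CNN image feature -> multimodal layer
      U    = rng.normal(0, 0.01, (D_m, V))     # multimodal layer -> vocabulary logits

      def step(w_t, r_prev, img):
          # one step: P(next word | previous words, image)
          r_t = np.maximum(0.0, w_t @ W_wr + r_prev @ W_r)    # recurrent layer (ReLU)
          m_t = np.tanh(w_t @ V_w + r_t @ V_r + img @ V_i)    # multimodal fusion layer
          logits = m_t @ U
          p = np.exp(logits - logits.max())
          return p / p.sum(), r_t                             # softmax, new state

      p, r = step(rng.normal(size=D_w), np.zeros(D_r), rng.normal(size=D_i))
      print(p.shape, p.sum())  # (1000,) 1.0 -- a distribution to sample the next word from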
  8. Karpathy, A.; Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions (2015) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1868) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1868,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1868)
      0.16666667 = coord(1/6)
    
    Abstract
    We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations.
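    Entry 8's alignment objective scores an image-sentence pair by letting every word pick its best-matching image region in the shared embedding space and summing those matches. A minimal sketch of that score (random vectors stand in for trained CNN region and RNN word embeddings; dimensions are arbitrary):

      import numpy as np

      def alignment_score(region_vecs, word_vecs):
          # each word votes for its best-matching region (dot product in the
          # common space); the image-sentence score is the sum of those votes
          sims = word_vecs @ region_vecs.T      # (n_words, n_regions)
          return sims.max(axis=1).sum()

      rng = np.random.default_rng(1)
      regions = rng.normal(size=(19, 64))   # stand-ins for trained region embeddings
      words   = rng.normal(size=(7, 64))    # stand-ins for trained word embeddings
      print(alignment_score(regions, words))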
  9. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 133) [ClassicSimilarity], result of:
          0.0605745 = score(doc=133,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=133)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
  10. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 2861) [ClassicSimilarity], result of:
          0.0605745 = score(doc=2861,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 2861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.16666667 = coord(1/6)
    
    Abstract
    Today's conventional search engines hardly provide content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search arises. SWS is an emerging area of web search which combines Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it works one layer above what Google or any other search engine retrieves by analyzing the query both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained will be accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links. For ranking, an algorithm is applied that fetches more apt results for the user query.
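    Entry 10 does not spell out SIEU's expansion algorithm, but ontology-driven keyword expansion of the kind it describes can be sketched minimally (the toy ontology below is hypothetical):

      # toy slice of a university-domain ontology: concept -> related terms
      ontology = {
          "professor": ["faculty", "lecturer"],
          "course":    ["module", "lecture"],
      }

      def expand_query(terms):
          # keyword expansion: add the ontology neighbours of each query term
          expanded = set(terms)
          for t in terms:
              expanded.update(ontology.get(t, []))
          return expanded

      print(expand_query(["course", "professor"]))
      # -> the six terms above (set order may vary); the expanded query is then
      #    sent to the underlying engine and the results re-ranked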
  11. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.01
    0.008076601 = product of:
      0.0484596 = sum of:
        0.0484596 = weight(_text_:ranking in 2547) [ClassicSimilarity], result of:
          0.0484596 = score(doc=2547,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.23904754 = fieldWeight in 2547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
      0.16666667 = coord(1/6)
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of individual articles, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. Features such as ranking, autocompletion, multi-faceted classification and refinement options reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (the German equivalent of STEM) topics in order to develop an advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesaurus. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students an insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in the development process of the software regarding the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3000 terms in our discovery tool WILBERT.
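    iQvoc stores such micro thesauri as SKOS. A minimal sketch of the kind of SKOS structure involved, built with rdflib (the namespace and concepts are invented for illustration; iQvoc's own data model is not shown here):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")   # invented namespace
      g = Graph()
      g.bind("skos", SKOS)

      g.add((EX.logistics, RDF.type, SKOS.Concept))
      g.add((EX.logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))

      g.add((EX.scm, RDF.type, SKOS.Concept))
      g.add((EX.scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
      g.add((EX.scm, SKOS.altLabel, Literal("SCM", lang="en")))   # abbreviation
      g.add((EX.scm, SKOS.broader, EX.logistics))                 # hierarchical alignment

      print(g.serialize(format="turtle"))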
  12. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.01
    0.00605745 = product of:
      0.0363447 = sum of:
        0.0363447 = weight(_text_:ranking in 1253) [ClassicSimilarity], result of:
          0.0363447 = score(doc=1253,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.17928566 = fieldWeight in 1253, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1253)
      0.16666667 = coord(1/6)
    
    Abstract
    We are currently experimenting with newsgroups as collections. We have built an initial prototype which automatically classifies and summarizes newsgroups within the LCC. (The prototype can be tested below, and more details may be found at http://pharos.alexandria.ucsb.edu/). The prototype uses electronic library catalog records as a `training set' and Latent Semantic Indexing (LSI) for IR. We use the training set to build a rich set of classification terminology, and associate these terms with the relevant categories in the LCC. This association between terms and classification categories allows us to relate users' queries to nodes in the LCC so that users can select appropriate query categories. Newsgroups are similarly associated with classification categories. Pharos then matches the categories selected by users to relevant newsgroups. In principle, this approach allows users to exclude newsgroups that might have been selected based on an unintended meaning of a query term, and to include newsgroups with relevant content even though the exact query terms may not have been used. This work is extensible to other types of classification, including geographical, temporal, and image features. Before discussing the methodology of the collection summarization and selection, we first present an online demonstration below. The demonstration is not intended to be a complete end-user interface. Rather, it is intended merely to offer a view of the process to suggest the "look and feel" of the prototype. The demo works as follows. First, supply it with a few keywords of interest. The system will then use those terms to try to return to you the most relevant subject categories within the LCC. Assuming that the system recognizes any of your terms (it has over 400,000 terms indexed), it will give you a list of 15 LCC categories sorted by relevancy ranking. From there, you have two choices. The first choice, by clicking on the "News" links, is to get a list of newsgroups which the system has identified as relevant to the LCC category you select. The other choice, by clicking on the LCC ID links, is to enter the LCC hierarchy starting at the category of your choice and navigate the tree until you locate the best category for your query. From there, again, you can get a list of newsgroups by clicking on the "News" links. After having shown this demonstration to many people, we would like to suggest that you first give it easier examples before trying to break it. For example, "prostate cancer" (discussed below), "remote sensing", "investment banking", and "gershwin" all work reasonably well.
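    The Pharos prototype pairs a catalog-record training set with Latent Semantic Indexing to map queries onto LCC categories. A minimal sketch of that pipeline using scikit-learn (the toy records and classes are invented, and the original system predates these libraries):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      # toy "training set": catalog-record text tagged with an LCC class
      records = ["prostate cancer treatment oncology",
                 "remote sensing satellite imagery",
                 "investment banking securities finance"]
      lcc = ["RC", "G", "HG"]

      vec = TfidfVectorizer()
      X = vec.fit_transform(records)
      lsi = TruncatedSVD(n_components=2).fit(X)     # the LSI step
      X_lsi = lsi.transform(X)

      q = lsi.transform(vec.transform(["prostate cancer"]))
      sims = cosine_similarity(q, X_lsi)[0]
      print(sorted(zip(lcc, sims), key=lambda p: -p[1]))  # categories by relevancy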
  13. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.00
    0.0039894576 = product of:
      0.023936745 = sum of:
        0.023936745 = product of:
          0.07181023 = sum of:
            0.07181023 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.07181023 = score(doc=3925,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 7.2006 15:22:28
  14. Dunning, A.: Do we still need search engines? (1999) 0.00
    0.0039493614 = product of:
      0.023696167 = sum of:
        0.023696167 = product of:
          0.0710885 = sum of:
            0.0710885 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.0710885 = score(doc=6021,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Ariadne. 1999, no.22
  15. Rudner, L.: Who is going to mine digital library resources? : and how? (2000) 0.00
    0.0034158952 = product of:
      0.02049537 = sum of:
        0.02049537 = product of:
          0.06148611 = sum of:
            0.06148611 = weight(_text_:29 in 6800) [ClassicSimilarity], result of:
              0.06148611 = score(doc=6800,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.46638384 = fieldWeight in 6800, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6800)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    26.12.2011 16:38:29
  16. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.00
    0.0033851666 = product of:
      0.020311 = sum of:
        0.020311 = product of:
          0.060932998 = sum of:
            0.060932998 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.060932998 = score(doc=3895,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    24. 8.2005 19:20:22
  17. Griffiths, T.L.; Steyvers, M.: A probabilistic approach to semantic representation (2002) 0.00
    0.0032205367 = product of:
      0.01932322 = sum of:
        0.01932322 = product of:
          0.05796966 = sum of:
            0.05796966 = weight(_text_:29 in 3671) [ClassicSimilarity], result of:
              0.05796966 = score(doc=3671,freq=4.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.43971092 = fieldWeight in 3671, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3671)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 6.2015 14:55:01
    29. 6.2015 16:09:05
  18. Guidi, F.; Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge (2015) 0.00
    0.0028209724 = product of:
      0.016925834 = sum of:
        0.016925834 = product of:
          0.050777502 = sum of:
            0.050777502 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.050777502 = score(doc=5865,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 2.2017 12:51:57
  19. Sojka, P.; Liska, M.: The art of mathematics retrieval (2011) 0.00
    0.0027926206 = product of:
      0.016755722 = sum of:
        0.016755722 = product of:
          0.050267164 = sum of:
            0.050267164 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.050267164 = score(doc=3450,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  20. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.00
    0.0023936746 = product of:
      0.014362047 = sum of:
        0.014362047 = product of:
          0.04308614 = sum of:
            0.04308614 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.04308614 = score(doc=1967,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
