Search (418 results, page 1 of 21)

  • language_ss:"e"
  • type_ss:"a"
  • type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.13
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  2. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.01
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease is enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Footnote
    See also: Rötzer, F.: Warum schrumpft das Gehirn des Menschen seit ein paar Tausend Jahren? [Why has the human brain been shrinking for a few thousand years?] At: https://krass-und-konkret.de/wissenschaft-technik/warum-schrumpft-das-gehirn-des-menschen-seit-ein-paar-tausend-jahren/. "... for some thousands of years - some say for 10,000 years - that is, after the beginning of agriculture, sedentary life and the founding of cities, as well as the invention of writing, the human brain surprisingly shrank again. ... It is generally assumed that with the first tools, and above all beginning with the invention of writing, cognitive functions, especially memory, were externalized, albeit at the price of having to develop new capacities, for example reading and writing. Memory comprises individual experiences but also collective knowledge, to which all members of a community contribute and into which the knowledge and experiences of one's ancestors are inscribed. In the digital age, the externalization and unburdening of our brains goes much further still, because with AI, for instance, not only knowledge content but also cognitive abilities such as searching, collecting, analyzing and evaluating information for decision-making are externalized, while the externalized brains, such as the internet, learn and expand collectively in real time. Through neural implants, humans could eventually be connected directly to the externalized brains, and could also directly expand their cognitive capacities by incorporating prostheses, new sensors or machines/robots, even remote ones, into the augmented body of the brains.
    The researchers see these developments as background, but aim to explain, via a comparison with brain evolution in ants, why humans today have developed smaller brains than their ancestors of 100,000 years ago. The decrease in brain size could, so the hypothesis goes, "result from the externalization of knowledge and the advantages of group-level decision-making, due in part to the advent of social systems of distributed cognition and the storage and sharing of information"."
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  3. Stewart, A.: Sociohistorical recommendations for the reclassification of pentecostalism in the Dewey Decimal Classification system (2019) 0.00
    
    Abstract
    This article explains, first, how Pentecostalism is treated in each of the nine printed editions of the Dewey Decimal Classification whose index lists the term - from the 15th edition of 1951 through the most recent, the 23rd edition of 2011. Problems with the characterization of Pentecostalism are identified - in particular, that it is portrayed as an America-centered and racially homogeneous phenomenon. This prevents a sociohistorically accurate representation of Pentecostalism as a geographically and racially diverse religious tradition within library collections organized by the world's most widely used classification. Second, recommendations are given for the reclassification of Pentecostalism in the Dewey Decimal Classification that would contribute to a more accurate sociohistorical representation and would thereby also improve access to the wide range of literature on this global and diverse religious tradition.
  4. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.00
    
    Date
    22. 7.2006 15:22:28
  5. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondenceanalysis instead of latent semantic analysis (2023) 0.00
    
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
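    As a rough sketch of the contrast drawn here (not code from the paper; the toy matrix and the use of a plain SVD are assumptions), LSA factors the raw or weighted document-term matrix directly, while CA factors the standardized residuals, which removes the marginal (row and column total) effects:

        import numpy as np

        # Toy document-term matrix: rows = documents, columns = terms.
        X = np.array([[3., 0., 1.],
                      [2., 1., 0.],
                      [0., 4., 2.]])

        # LSA: SVD of the raw (or weighted) matrix; the leading dimensions
        # tend to reflect row/column totals, i.e. the marginal effects.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)

        # CA: SVD of the standardized residuals, which factors out the margins.
        P = X / X.sum()                    # correspondence matrix
        r = P.sum(axis=1, keepdims=True)   # row masses
        c = P.sum(axis=0, keepdims=True)   # column masses
        S = (P - r @ c) / np.sqrt(r @ c)   # standardized residuals
        U2, s2, Vt2 = np.linalg.svd(S, full_matrices=False)

    Documents and queries would then be compared in the space of the leading singular vectors; the two weightings the authors test correspond to transforming X before the SVD and to raising the singular values to a chosen exponent.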
    Date
    15. 9.2023 12:28:29
  6. Linden, E.J. van der; Vliegen, R.; Wijk, J.J. van: Visual Universal Decimal Classification (2007) 0.00
    
    Abstract
    UDC aims to be a consistent and complete classification system that enables practitioners to classify documents swiftly and smoothly. The eventual goal of UDC is to enable the public at large to retrieve documents from large collections of documents that are classified with UDC. The large size of the UDC Master Reference File (MRF), with over 66,000 records, makes it difficult to obtain an overview and to understand its structure. Moreover, finding the right classification in the MRF turns out to be difficult in practice. Last but not least, retrieval of documents requires insight into and understanding of the coding system. Visualization is an effective means to support the development of UDC as well as its use by practitioners. Moreover, visualization offers possibilities to use the classification without using the coding system as such. MagnaView has developed an application which demonstrates the use of interactive visualization to face these challenges. In our presentation, we discuss these challenges and demonstrate how the application helps to address them. Examples of visualizations can be found below.
    Source
    Extensions and corrections to the UDC. 29(2007), S.297-300
  7. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    
    Abstract
    This is a bibliometric study of the development of the open access availability of scholarly journal articles in Germany that appeared in the period 2010-18 and are indexed in the Web of Science. A particular focus of the analysis was the question of whether, and to what extent, the open access profiles of universities and non-university research institutions in Germany differ from one another.
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
    Footnote
    The article is accompanied by an interactive data supplement that allows the OA shares to be compared at the level of individual institutions: https://subugoe.github.io/oauni/articles/supplement.html. The work arose from the collaboration of the BMBF projects OAUNI and OASE within the funding line "Quantitative Wissenschaftsforschung": https://www.wihoforschung.de/de/quantitative-wissenschaftsforschung-1573.php.
  8. Patriarca, S.: Information literacy gives us the tools to check sources and to verify factual statements : What does Popper's "Es gibt keine Autoritäten" mean? (2021) 0.00
    
    Abstract
    I wonder if you would consider an English perspective on the exchange between Bernd Jörs and Hermann Huemer. In my career in the independent education sector I can recall many discussions and Government reports about cross-curricular issues such as logical reasoning and critical thinking. In the IB system this led to the inclusion in the Diploma of "Theory of Knowledge." In the UK we had "key skills" and "critical thinking." One such key skill is what we now call "information literacy." In his parody of information literacy, Dr Jörs seems to have mistaken a necessary condition for a sufficient condition. The fact that information competence may be necessary for serious academic study does not of course make it sufficient. When that is understood, the joke about the megalomaniac rather loses its force. (We had better pass over the rant which follows, the sneer at "earth sciences" and the German prejudice towards Austrians.)
    Content
    On: Bernd Jörs, Zukunft der Informationswissenschaft und Kritischer Rationalismus - Gegen die Selbstüberschätzung der Vertreter der "Informationskompetenz" eine Rückkehr zu Karl R. Popper geboten, in: Open Password, 30 August - Herbert Huemer, Informationskompetenz als Kompetenz für lebenslanges Lernen, in: Open Password, #965, 25 August 2021. Huemer was responding to Bernd Jörs's article "Wie sich "Informationskompetenz" methodisch-operativ untersuchen lässt", published in Open Password on 20 August 2021.
  9. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.00
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  10. Hudon, M.: ¬The status of knowledge organization in library and information science master's programs (2021) 0.00
    
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and elective courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the reality that one third of KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
    Date
    27. 9.2022 18:46:29
  11. Priss, U.: Description logic and faceted knowledge representation (1999) 0.00
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  12. Francu, V.: Does convenience trump accuracy? : the avatars of the UDC in Romania (2007) 0.00
    
    Abstract
    This paper will concentrate on some major issues regarding the potential of UDC and the current controversy about the use of UDC in Romania: i) the importance of hierarchical structures in controlled vocabularies, with a direct impact on improved information retrieval through the browsing function, which enables visualizing the hierarchies in subject areas rather than just locating a particular topic; ii) the lack of popularity of the UDC as an indexing and information retrieval language among its users, be they librarians or end users of library OPACs; and iii) the situation of UDC teachers and teaching in Romanian universities.
    Source
    Extensions and corrections to the UDC. 29(2007), S.263-272
  13. Hajdu Barat, A.: Multilevel education, training, traditions and research in Hungary (2007) 0.00
    
    Abstract
    This paper aims to explore the theory and practice of education in schools and in further education as two levels of the Information Society in Hungary, with LIS education considered a third level above these. I attempt to survey the curriculum and content of different subjects in school, and the division of the programme for librarians. There is a great and long history of UDC usage in Hungary. The lecture sketches the stages of this tradition from its beginnings to the present situation. Szabó Ervin began teaching the UDC at the Municipal Library in Budapest from 1910. He not only used the UDC but also taught it to librarians in his courses. As a consequence of Szabó Ervin's activity, librarians knew and used the UDC very early, and all libraries came to use it. The article gives a short overview of recent developments and duties, the situation after the new Hungarian edition, UDC usage in Hungarian OPACs, and the possibilities of UDC visualization.
    Source
    Extensions and corrections to the UDC. 29(2007), S.273-284
  14. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.00
    
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
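    A toy illustration of this claim (an invented example; the authors use the UMLS domain model and NLP-based phrase extraction, not this code): indexing phrases alone rewards mere co-occurrence, whereas indexing the relation asserted between the concepts lets a query demand that relation:

        # Phrase indexing: a document reduces to a bag of phrases.
        doc_phrases = {"ketone bodies", "cerebral metabolism", "rat brain"}

        # Semantic processing adds a typed relation between the concepts
        # (the relation type here is modeled on the UMLS semantic network).
        doc_relations = {("ketone bodies", "AFFECTS", "cerebral metabolism")}

        def phrase_match(query_phrases, phrases):
            # Succeeds whenever the query phrases merely co-occur.
            return query_phrases <= phrases

        def relation_match(query_relation, relations):
            # Stricter: the asserted relation itself must be indexed.
            return query_relation in relations

        print(phrase_match({"ketone bodies", "cerebral metabolism"}, doc_phrases))                 # True
        print(relation_match(("ketone bodies", "AFFECTS", "cerebral metabolism"), doc_relations))  # True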
    Date
    29. 6.2015 14:51:28
  15. Teal, W.: Alma enumerator : automating repetitive cataloging tasks with Python (2018) 0.00
    
    Abstract
    In June 2016, the Wartburg College library migrated to a new integrated library system, Alma. In the process, we lost the enumeration and chronology data for roughly 79,000 print serial item records. Re-entering all this data by hand seemed an unthinkable task. Fortunately, the information was recorded as free text in each item's description field. By using Python, Alma's API and much trial and error, the Wartburg College library was able to parse the serial item descriptions into enumeration and chronology data that was uploaded back into Alma. This paper discusses the design and feasibility considerations addressed in trying to solve this problem, the complications encountered during development, and the highlights and shortcomings of the collection of Python scripts that became Alma Enumerator.
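    A minimal sketch of the parse step described here (the description pattern and the target field names are illustrative assumptions, not the actual Alma Enumerator code or Alma API):

        import re

        # Illustrative pattern for descriptions like "v.12 no.3 (Mar 1998)".
        DESC = re.compile(r"v\.(?P<vol>\d+)\s+no\.(?P<issue>\d+)\s+\((?P<chron>[^)]+)\)")

        def parse_description(description):
            """Split a free-text item description into enumeration/chronology fields."""
            m = DESC.match(description)
            if m is None:
                return None  # no match: flag the record for manual review
            return {
                "enumeration_a": m.group("vol"),    # volume
                "enumeration_b": m.group("issue"),  # issue
                "chronology_i": m.group("chron"),   # date
            }

        print(parse_description("v.12 no.3 (Mar 1998)"))
        # {'enumeration_a': '12', 'enumeration_b': '3', 'chronology_i': 'Mar 1998'}

    In the workflow the abstract describes, each parsed record would then be written back to the item via Alma's API, with unparsable descriptions set aside for hand checking.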
    Date
    10.11.2018 16:29:37
  16. Sales, R. de; Pires, T.B.: ¬The classification of Harris : influences of Bacon and Hegel in the universe of library classification (2017) 0.00
    
    Abstract
    The studies of library classifications generally interact with a historical approach that contextualizes the research and with the ideas related to classification that are typical of philosophy. In the 19th century, the North American philosopher and educator William Torrey Harris developed a book classification at the St. Louis Public School, based on Francis Bacon and Georg Wilhelm Friedrich Hegel. The objective of the present study is to analyze Harris's classification, reflecting upon his theoretical and philosophical backgrounds in order to understand Harris's contribution to Knowledge Organization (KO). To achieve this objective, the study adopts a critical-descriptive approach for the analysis. The results show some influences of Bacon and Hegel in Harris's classification.
    Content
    Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
    Theme
    Geschichte der Klassifikationssysteme
  17. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.00
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their various arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  18. Rozman, D.; Rifl, B.: Universal Decimal Classification in Slovenia (2007) 0.00
    
    Abstract
    In Slovenia, most libraries use the UDC system for cataloguing purposes. Open-access shelving with UDC also has a long tradition in Slovenian public libraries and in some academic libraries. The last printed Slovenian UDC edition dates from 1991; this outdated edition included a very short guide to the use of UDC and about 11,000 notations. In the National and University Library, Ljubljana, Slovenia, a team of a coordinator, editors, translators and a computer programmer was formed to prepare the Slovenian translation of the UDC MRF 2001 version. The online edition, in the ISO 2709 format, has kept the original data structure. Searching by UDC numbers, precise searching, and full-text searching of UDC explanations, notes, examples, etc. have been provided. There are many links in the application which guide the users to UDC numbers, so that the appropriate UDC number can be recognized and chosen. Those parts of the application superstructure are especially user-friendly and reviewable. Access to the UDC database is controlled. The basics of UDC are explained in the new Slovenian manual »Univerzalna decimalna klasifikacija«, published by the National and University Library in Ljubljana in 2006. The authors have created a short, clear and useful manual for beginners as well as for experienced librarians who want to classify and arrange their library holdings in new and innovative ways. In the paper, a description of the characteristics of the Slovenian UDC manual is presented, and some proposals for future developments in UDC are expressed.
    Source
    Extensions and corrections to the UDC. 29(2007), S.253-262
  19. Genetasio, G.: ¬The International Cataloguing Principles and their future, in: JLIS.it 3/1 (2012) 0.00
    
    Abstract
    The article aims to provide an update on the 2009 Statement of International Cataloguing Principles (ICP) and on the status of work on the Statement by the IFLA Cataloguing Section. The article begins with a summary of the drafting process of the ICP by the IME ICC, International Meeting of Experts on an International Cataloguing Code, focusing in particular on the first meeting (IME ICC1) and on the earlier drafts of the 2009 Statement. It then analyzes both the major innovations and the unsatisfactory aspects of the ICP. Finally, it explains and comments on the recent documents by the IFLA Cataloguing Section relating to the ICP, which express their intention to revise the Statement and to verify the convenience of drawing up an international cataloguing code. The latter intention is considered in detail and criticized by the author in the light of the recent publication of the RDA, Resource Description and Access. The article is complemented by an updated bibliography on the ICP.
    Theme
    Geschichte der Kataloge
  20. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.00
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
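    A minimal sketch of the general idea on one thesaurus record (today's SKOS vocabulary and the example namespace are assumptions; the paper itself defines a four-step method with its own RDFS/OWL target schemas):

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace

        g = Graph()
        g.bind("skos", SKOS)

        # One converted record: preferred term, synonym, broader term.
        term = EX["InformationRetrieval"]
        g.add((term, RDF.type, SKOS.Concept))
        g.add((term, SKOS.prefLabel, Literal("Information retrieval", lang="en")))
        g.add((term, SKOS.altLabel, Literal("IR", lang="en")))
        g.add((term, SKOS.broader, EX["InformationScience"]))

        print(g.serialize(format="turtle"))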
    Date
    29. 7.2011 14:44:56
    Series
    Lecture notes in computer science; no.3298
