Search (15612 results, page 781 of 781)

  • language_ss:"e"
  • type_ss:"a"
  1. Borko, H.: Research in computer based classification systems (1985)
    
    Abstract
    The selection in this reader by R. M. Needham and K. Sparck Jones reports an early approach to automatic classification that was taken in England. The following selection reviews various approaches that were being pursued in the United States at about the same time. It then discusses a particular approach initiated in the early 1960s by Harold Borko, at that time Head of the Language Processing and Retrieval Research Staff at the System Development Corporation, Santa Monica, California and, since 1966, a member of the faculty at the Graduate School of Library and Information Science, University of California, Los Angeles. As was described earlier, there are two steps in automatic classification: the first is to identify pairs of terms that are similar by virtue of co-occurring as index terms in the same documents, and the second is to form equivalence classes of intersubstitutable terms. To compute similarities, Borko and his associates used a standard correlation formula; to derive classification categories, where Needham and Sparck Jones used clumping, the Borko team used the statistical technique of factor analysis. The fact that documents can be classified automatically, and in any number of ways, is worthy of passing notice. Worthy of serious attention would be a demonstration that a computer-based classification system was effective in the organization and retrieval of documents. One reason for the inclusion of the following selection in the reader is that it addresses the question of evaluation. To evaluate the effectiveness of their automatically derived classification, Borko and his team asked three questions. The first was: is the classification reliable? In other words, could the categories derived from one sample of texts be used to classify other texts? Reliability was assessed by a case-study comparison of the classes derived from three different samples of abstracts. The not-so-surprising conclusion reached was that automatically derived classes were reliable only to the extent that the sample from which they were derived was representative of the total document collection. The second evaluation question asked whether the classification was reasonable, in the sense of adequately describing the content of the document collection. The answer was sought by comparing the automatically derived categories with categories in a related classification system that was manually constructed. Here the conclusion was that the automatic method yielded categories that fairly accurately reflected the major areas of interest in the sample collection of texts; however, since there were only eleven such categories and they were quite broad, they could not be regarded as suitable for use in a university or any large general library. The third evaluation question asked whether automatic classification was accurate, in the sense of producing results similar to those obtainable by human classifiers. When using human classification as a criterion, automatic classification was found to be 50 percent accurate.
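
    The two-step procedure described above - pairwise term similarity from co-occurrence, then equivalence classes of intersubstitutable terms - can be sketched in a few lines. A minimal sketch, assuming an invented toy document-term matrix; Borko's second step used factor analysis, for which a simple threshold clustering is substituted here:

        # Sketch of the two-step classification described above: term-term
        # similarity from co-occurrence (a standard correlation, as in Borko's
        # work), then equivalence classes of similar terms. The document-term
        # matrix is invented toy data, and threshold clustering stands in for
        # Borko's factor analysis.
        import numpy as np

        terms = ["computer", "program", "library", "catalog", "index"]
        dtm = np.array([            # rows = documents, columns = terms
            [1, 1, 0, 0, 0],
            [1, 1, 0, 0, 1],
            [0, 0, 1, 1, 1],
            [0, 0, 1, 1, 0],
            [1, 0, 0, 1, 1],
        ])

        corr = np.corrcoef(dtm.T)   # step 1: correlate term columns

        # Step 2: union-find over pairs whose correlation clears a threshold.
        parent = list(range(len(terms)))

        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i

        THRESHOLD = 0.5             # arbitrary cut-off for this toy example
        for i in range(len(terms)):
            for j in range(i + 1, len(terms)):
                if corr[i, j] >= THRESHOLD:
                    parent[find(i)] = find(j)

        classes = {}
        for i, t in enumerate(terms):
            classes.setdefault(find(i), []).append(t)
        print(list(classes.values()))

    Factor analysis would instead extract latent category loadings from the same correlation matrix; the clustering step above merely stands in for it.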
  2. Rogers, Y.: New theoretical approaches for human-computer interaction (2003)
    
    Source
    Annual review of information science and technology. 38(2004), S.87-144
  3. Keith, S.: Searching for news headlines : connections between unresolved hyperlinking issues and a new battle over copyright online (2007)
    
    Abstract
    In March 2005, the Paris-based news service Agence France Presse (AFP) sued Google Inc. in an American court, charging that the search engine's news aggregator program had illegally infringed the wire service's copyright. The lawsuit, filed in the U.S. District Court for the District of Columbia, claimed that Google News had engaged in the infringement since its launch in September 2002 by »reproducing and publicly displaying AFP's photographs, headlines, and story leads«. The claim also said that Google News had ignored requests that it cease and desist the infringement, and it asked for more than $17 million (about 13.6 million Euros) in damages. Within a few days, Google News was removing links to Agence France Presse news articles and photographs. However, Agence France Presse said it would still pursue the lawsuit because of the licensing fees it was owed as a result of what it claimed was Google's past copyright infringement. The case, which was still pending in early 2007 as the sides struggled to reconstruct and evaluate specific past Google News pages, was interesting for several reasons. First, it pitted the company that owns the world's most popular search engine against the world's oldest news service; Agence France Presse was founded in Paris in 1835 by Charles-Louis Havas, sometimes known as the father of global journalism. Second, the copyright-infringement allegations made by AFP had not been made by most of the 4,500 or so other news organizations whose material is used in exactly the same way on Google News every day, though Google did lose somewhat similar cases in German and Belgian courts in 2004 and 2006, respectively. Third, AFP's assertions and Google's counterclaims offer an intriguing argument about the nature of key components of traditional and new-media journalism, especially news headlines. Finally, the case warrants further examination because it is essentially an argument over the fundamental nature of Internet hyperlinking. Some commentators have noted that a ruling against Google could be disastrous for blogs, which also often quote news stories, while other commentators have concluded that a victory for Agence France Presse would call into question the future of online news aggregators. This chapter uses the Agence France Presse lawsuit as a way to examine arguments about the legality of news aggregator links to copyrighted material. Using traditional legal research methods, it attempts to put the case into context by referring to key U.S. and European Internet hyperlinking lawsuits from the 1990s through 2006. The chapter also discusses the nature of specific traditional journalistic forms such as headlines and story leads and whether they can be copyrighted. Finally, the chapter argues that out-of-court settlements and conflicting court rulings have left considerable ambiguity around the intersection of copyright, free speech, and information-cataloging concerns, leaving Google News and other aggregators vulnerable to claims of copyright infringement.
  4. Thomas, C.; McDonald, R.H.; McDowell, C.S.: Overview - Repositories by the numbers (2007)
    
    Abstract
    Scholarly digital repositories continue to be one of the most dynamic and varied components of the emerging digital research library. Little consensus is evident on matters such as depositing content in disciplinary or institutional repositories, or both. Debates about deposit mandates and access to research have spilled into the political arena and have focused much attention on various aspects of digital repositories, including the economics and patterns of scholarly publishing, systems and technology, governmental and organizational policies, access, accountability, research impact, and the motivations of individual researchers. Scholarly digital repositories are a rich area for both empirical research and philosophical debate, and are the central theme of a growing body of published literature. It is surprising, therefore, that so much is still unknown about the basic nature of digital repositories, including both differences and similarities. As the two Repositories by the Numbers articles in this issue show, digital scholarly repositories are diversifying both in their general nature and in the information they contain. Because there is still much to be discovered or understood at the most basic levels of digital repositories, co-authors Chuck Thomas and Robert H. McDonald and author Cat McDowell offer readers two different but complementary statistical studies of various types of institutional and disciplinary repositories. Reiterating a theme of many of the recent works presented at the 2nd International Conference on Institutional Repositories, Thomas and McDonald apply statistical techniques to explore patterns of scholarly participation by more than 30,000 authors in several categories of repositories. McDowell reports on her ongoing analysis of the growth and development of institutional repositories in American universities and colleges. Together, these articles reveal new aspects of the digital repository landscape, and present data that will be of immense interest to repository planners and sponsors.
  5. Hammond, T.; Hannay, T.; Lund, B.; Scott, J.: Social bookmarking tools (I) : a general review (2005)
    
    Abstract
    A number of such utilities are presented here, together with an emergent new class of tools that caters more to academic communities and that stores not only user-supplied tags, but also structured citation metadata terms wherever it is possible to glean this information from service providers. This provision of rich, structured metadata means that the user is provided with an accurate third-party identification of a document, which could be used to retrieve that document, but is also free to search on user-supplied terms so that documents of interest (or rather, references to documents) can be made discoverable and aggregated with other similar descriptions either recorded by the user or by other users. In an XML.com article last year reviewing one of the better-known social bookmarking tools, del.icio.us, Matt Biddulph declares that the "del.icio.us-space has three major axes: users, tags, and URLs". We fully support that assessment but choose to present this deconstruction in reverse order. This paper thus first recaps a brief history of bookmarks, then discusses the current interest in tagging, moves on to look at certain social issues, and finally considers some of the feature sets offered by the new bookmarking tools. A general review of a number of common social bookmarking tools is presented in the annex. A companion paper describes a case study in more detail: the tool that Nature Publishing Group has made available to the scientific community as an experimental entrée into this field - Connotea; our reasons for endeavouring to provide such a utility; and experiences gained and lessons learned.
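
    Biddulph's three axes map directly onto a simple data model: a bookmark is a (user, URL, tags) triple, and each axis is just an inverted index over those triples. A minimal sketch with invented users and URLs:

        # Biddulph's three axes - users, tags, URLs - as triples plus one
        # inverted index per axis. All names and URLs are invented.
        from collections import defaultdict

        bookmarks = [
            ("alice", "https://example.org/paper1", {"folksonomy", "tagging"}),
            ("bob",   "https://example.org/paper1", {"tagging", "metadata"}),
            ("alice", "https://example.org/paper2", {"metadata"}),
        ]

        by_user, by_url, by_tag = defaultdict(set), defaultdict(set), defaultdict(set)
        for user, url, tags in bookmarks:
            by_user[user].add(url)      # axis 1: a user's collection
            by_url[url].add(user)       # axis 2: who bookmarked a document
            for tag in tags:
                by_tag[tag].add(url)    # axis 3: aggregation by tag

        print(by_tag["metadata"])       # documents surfaced across users by one tag
        print(by_url["https://example.org/paper1"])   # readers of one document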
  6. Zia, L.L.: New projects and a progress report : the NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program (2001)
    
    Theme
    Information Gateway
  7. Baker, T.: Languages for Dublin Core (1998)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
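
    The division of labor described here - one machine-readable token shared across languages, with the human-readable name and definition localized - can be pictured as a small mapping. A hypothetical sketch; the French and German names are illustrative stand-ins, not official DCMI translations:

        # Shared indexing tokens with per-language display names. The tokens
        # (Creator, Subject) come from Dublin Core; the French and German
        # names below are illustrative, not official DCMI translations.
        ELEMENTS = {
            "Creator": {"en": "Creator", "fr": "Créateur", "de": "Urheber"},
            "Subject": {"en": "Subject", "fr": "Sujet", "de": "Thema"},
        }

        def dc_meta(token, value):
            """Emit an HTML meta tag keyed by the language-neutral token."""
            return f'<meta name="DC.{token}" content="{value}">'

        # Records created in any language index together on the same token;
        # only the documentation shown to the cataloguer is localized.
        print(dc_meta("Creator", "Baker, Thomas"))
        print(ELEMENTS["Creator"]["fr"])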
  8. Chen, H.; Baptista Nunes, J.M.; Ragsdell, G.; An, X.: Somatic and cultural knowledge : drivers of a habitus-driven model of tacit knowledge acquisition (2019)
    
    Abstract
    Findings
    The findings of this research suggest that individual learning and development are deemed to be the fundamental feature for professional success and survival in the continuously changing environment of the software (SW) industry today. However, individual learning was described by the participants as much more than a mere individual process. It involves a collective and participatory effort within the organization and the sector as a whole, and a knowledge-sharing (KS) process that transcends organizational, cultural and national borders. Individuals in particular are mostly motivated by the pressing need to face and adapt to the dynamic and changeable environments of today's digital society that is led by the sector. Software practitioners are continuously in need of learning, refreshing and accumulating tacit knowledge, partly because it is required by their companies, but also due to a sound awareness of continuous technical and technological changes that seem only to increase with the advances of information technology. This led to a clear theoretical understanding that the continuous change facing the sector has led to individual acquisition of culture and somatic knowledge that in turn lays the foundation not only for the awareness of the need for continuous individual professional development but also for the creation of a habitus related to KS and continuous learning.
    Originality/value
    The study reported in this paper shows that there is a theoretical link between the existence of conducive organizational and sector-wide somatic and cultural knowledge, and the success of KS practices that lead to individual learning and development. Therefore, the theory proposed suggests that somatic and cultural knowledge are crucial drivers for the creation of a habitus of individual tacit knowledge acquisition. The paper further proposes a habitus-driven individual development (HDID) theoretical model that can be of use to both academics and practitioners interested in fostering and developing processes of KS and individual development in knowledge-intensive organizations.
  9. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985)
    
    Abstract
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method - citations, authors' names, imprints, style, and vocabulary - rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future), as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
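
    Fairthorne's observation that overlapping reading histories reveal common interests is easy to make concrete: treat each reader as a set of documents and measure pairwise overlap. A toy sketch with invented readers; the Jaccard measure and the threshold are choices made here, not Fairthorne's:

        # Reader classification by overlap of reading histories, in the
        # spirit of Fairthorne's remark above. Readers and documents are
        # invented; Jaccard overlap and the threshold are choices made here.
        from itertools import combinations

        histories = {
            "reader1": {"doc_a", "doc_b", "doc_c"},
            "reader2": {"doc_b", "doc_c", "doc_d"},
            "reader3": {"doc_x", "doc_y"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Pairs with substantial shared history indicate a common interest.
        for (u, hu), (v, hv) in combinations(histories.items(), 2):
            score = jaccard(hu, hv)
            if score > 0.3:
                print(u, v, round(score, 2))   # -> reader1 reader2 0.5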
  10. Brand, A.: CrossRef turns one (2001)
    
    Abstract
    CrossRef, the only full-blown application of the Digital Object Identifier (DOI®) System to date, is now a little over a year old. What started as a cooperative effort among publishers and technologists to prototype DOI-based linking of citations in e-journals evolved into an independent, non-profit enterprise in early 2000. We have made considerable headway during our first year, but there is still much to be done. When CrossRef went live with its collaborative linking service last June, it had enabled reference links in roughly 1,100 journals from a member base of 33 publishers, using a functional prototype system. The DOI-X prototype was described in an article published in D-Lib Magazine in February of 2000. On the occasion of CrossRef's first birthday as a live service, this article provides a non-technical overview of our progress to date and the major hurdles ahead. The electronic medium enriches the research literature arena for all players -- researchers, librarians, and publishers -- in numerous ways. Information has been made easier to discover, to share, and to sell. To take a simple example, the aggregation of book metadata by electronic booksellers was a huge boon to scholars seeking out obscure backlist titles, or discovering books they would never otherwise have known to exist. It was equally a boon for the publishers of those books, who saw an unprecedented surge in sales of backlist titles with the advent of centralized electronic bookselling. In the serials sphere, even in spite of price increases and the turmoil surrounding site licenses for some prime electronic content, libraries overall are now able to offer more content to more of their patrons. Yet undoubtedly, the key enrichment for academics and others navigating a scholarly corpus is linking, and in particular the linking that takes the reader out of one document and into another in a matter of a click or two. Since references are how authors make explicit the links between their work and precedent scholarship, what could be more fundamental to the reader than making those links immediately actionable? That said, automated linking is only really useful from a research perspective if it works across publications and across publishers. Not only do academics think about their own writings and those of their colleagues in terms of "author, title, rough date" -- the name of the journal itself is usually not high on the list of crucial identifying features -- but they are oblivious as to the identity of the publishers of all but their very favorite books and journals.
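
    The linking CrossRef enables rests on DOI resolution over HTTP: a request to the doi.org resolver redirects to the registered copy, and content negotiation (a facility added years after this 2001 article) returns the citation metadata itself. A present-day sketch, not part of the prototype described above; the DOI shown is the DOI Handbook's placeholder:

        # Resolving a DOI over HTTP, the mechanism behind the reference links
        # described above. Content negotiation postdates the 2001 article;
        # 10.1000/xyz123 is the DOI Handbook's placeholder - substitute any
        # Crossref-registered DOI to get real metadata back.
        import json
        import urllib.request

        doi = "10.1000/xyz123"
        req = urllib.request.Request(
            f"https://doi.org/{doi}",
            headers={"Accept": "application/vnd.citationstyles.csl+json"},
        )
        with urllib.request.urlopen(req) as resp:   # follows the DOI redirect
            record = json.load(resp)
        print(record.get("title"), record.get("author"))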
  11. Evens, M.W.: Natural language interface for an expert system (2002)
    
    Source
    Encyclopedia of library and information science. Vol.71, [=Suppl.34]
  12. Dahlberg, I.: How to improve ISKO's standing : ten desiderata for knowledge organization (2011)
    
    Content
    6. Establishment of national Knowledge Organization Institutes should be scheduled by national chapters, planned energetically and submitted to the corresponding administrative authorities for support. They could be attached to research institutions, e.g., the Max-Planck or Fraunhofer Institutes in Germany, or to universities. Their scope and research areas relate to the elaboration of knowledge systems of subject-related concepts, according to Desideratum 1, and may be connected to training activities and KO-subject-related research work.
    7. ISKO experts should not allow themselves to be overawed by the Internet and computer science, but should demonstrate their expertise more actively in public. They should tend to take a leading part in the ISKO secretariats and the KO institutes, and act as consultants and informants, as well as editors of statistics and other publications.
    8. All colleagues trained in the field of classification/indexing and thesaurus construction and active in different countries should be identified and approached for membership in ISKO. This would have to be accomplished by the General Secretariat with the collaboration of the experts in the different secretariats of the countries, as soon as they start to work. The more members ISKO has, the greater will be its reputation and influence. But it will also prove its professionalism by the quality of its products, especially its innovative conceptual order systems to come.
    9. ISKO should - especially in view of global expansion - intensify the promotion of knowledge about its own subject area through the publications mentioned here and in further publications as deemed necessary. It should be made clear that, especially in ISKO's own publications, professional subject indexes are a sine qua non.
    10. 1) Knowledge organization, having arisen from librarianship and documentation, whose contents have many points of contact with numerous application fields, should - although still linked with its areas of descent - be recognized in the long run as an independent, autonomous discipline located under the science of science, since only thereby can it fully play its role as an equal partner in all application fields; and 2) an "at-a-first-glance knowledge order" could be implemented through the Information Coding Classification (ICC), as this system is based on an entirely new approach, namely one based on general object areas, thus deviating from the discipline-oriented main classes of the current universal classification systems. It can therefore recover, by simple display on screen, the hitherto lost overview of all knowledge areas and fields. At one look, one perceives 9 object areas subdivided into 9 aspects, which break down into 81 subject areas with their 729 subject fields, including further special fields. The synthesis and place of order of all knowledge thus becomes evident at a glance to everybody. Nobody would any longer be irritated by the abundance of singular, apparently unrelated knowledge fields or become hesitant in his/her understanding of the world.
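
    The arithmetic behind that "one look" is ICC's strictly ninefold layout: 9 object areas, each split into 9 aspects (81 subject areas), each split again into 9 (729 subject fields). A sketch that generates the notation grid mechanically; the digits are positional only, and no actual ICC captions are reproduced:

        # ICC's ninefold layout: 9 object areas, 81 subject areas, 729
        # subject fields. Notations are generated positionally; no actual
        # ICC captions are reproduced here.
        areas = [str(i) for i in range(1, 10)]
        subjects = [a + str(j) for a in areas for j in range(1, 10)]
        fields = [s + str(k) for s in subjects for k in range(1, 10)]
        print(len(areas), len(subjects), len(fields))   # 9 81 729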
