Search (90 results, page 1 of 5)

  • × language_ss:"e"
  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
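
  The three filters above are Solr facet constraints; the half-open range year_i:[2000 TO 2010} includes 2000 but excludes 2010. A minimal sketch of the underlying request follows (the host, core name and free-text query are hypothetical; the field names, the fq filters and the debugQuery flag that produces the score explanations shown in the entries below are standard Solr parameters):

      import requests

      # Hypothetical Solr endpoint; the field names match the active filters above.
      SOLR_URL = "http://localhost:8983/solr/literature/select"

      params = {
          "q": "systematic indexing",      # free-text query (assumed, not shown on this page)
          "fq": [
              'language_ss:"e"',           # language facet
              'type_ss:"el"',              # document-type facet
              "year_i:[2000 TO 2010}",     # 2000 inclusive, 2010 exclusive
          ],
          "rows": 20,
          "debugQuery": "true",            # returns per-document score explanations
          "wt": "json",
      }

      response = requests.get(SOLR_URL, params=params)
      print(response.json()["response"]["numFound"])  # e.g. 90, as in the header above
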
  1. Dousa, T.: Everything Old is New Again : Perspectivism and Polyhierarchy in Julius O. Kaiser's Theory of Systematic Indexing (2007) 0.15
    0.15238476 = product of:
      0.22857714 = sum of:
        0.17932136 = weight(_text_:systematic in 4835) [ClassicSimilarity], result of:
          0.17932136 = score(doc=4835,freq=8.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.6314765 = fieldWeight in 4835, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4835)
        0.049255773 = product of:
          0.09851155 = sum of:
            0.09851155 = weight(_text_:indexing in 4835) [ClassicSimilarity], result of:
              0.09851155 = score(doc=4835,freq=12.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.51797354 = fieldWeight in 4835, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In the early years of the 20th century, Julius Otto Kaiser (1868-1927), a special librarian and indexer of technical literature, developed a method of knowledge organization (KO) known as systematic indexing. Certain elements of the method (its stipulation that all indexing terms be divided into the fundamental categories "concretes", "countries", and "processes", which are then to be synthesized into indexing "statements" formulated according to strict rules of citation order) have long been recognized as precursors to key principles of the theory of faceted classification. However, other, less well-known elements of the method may prove no less interesting to practitioners of KO. In particular, two aspects of systematic indexing seem to prefigure current trends in KO: (1) a perspectivist outlook that rejects universal classifications in favor of information organization systems customized to reflect local needs, and (2) the incorporation of index terms extracted from source documents into a polyhierarchical taxonomical structure. Kaiser's perspectivism anticipates postmodern theories of KO, while his principled use of polyhierarchy to organize terms derived from the language of source documents provides a potentially fruitful model that can inform current discussions about harvesting natural-language terms, such as tags, and incorporating them into a flexibly structured controlled vocabulary.
    Object
    Kaiser systematic indexing
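
  The indented tree under each entry is Lucene's "explain" output for its ClassicSimilarity (TF-IDF) scoring. As a rough sketch of how the figures in entry 1 combine (the formulas - tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, per-term weight = queryWeight * fieldWeight, then the coord factors - are the standard ClassicSimilarity ones; the numeric values are copied from the tree above):

      from math import sqrt

      # Values copied from the explain tree of entry 1 (doc 4835).
      QUERY_NORM = 0.049684696

      def term_weight(freq, idf, field_norm):
          """ClassicSimilarity per-term contribution: queryWeight * fieldWeight."""
          query_weight = idf * QUERY_NORM                 # idf * queryNorm (boost = 1)
          field_weight = sqrt(freq) * idf * field_norm    # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      systematic = term_weight(freq=8.0,  idf=5.715473,  field_norm=0.0390625)
      indexing   = term_weight(freq=12.0, idf=3.8278677, field_norm=0.0390625)

      # "indexing" sits in a nested sub-query where only 1 of 2 clauses matches: coord(1/2).
      clause_sum = systematic + indexing * 0.5
      # 2 of the 3 top-level query clauses match this document: coord(2/3).
      score = clause_sum * (2.0 / 3.0)

      print(systematic)  # ~0.17932136
      print(score)       # ~0.15238476
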
  2. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.06
    0.058544468 = product of:
      0.1756334 = sum of:
        0.1756334 = sum of:
          0.08043434 = weight(_text_:indexing in 3925) [ClassicSimilarity], result of:
            0.08043434 = score(doc=3925,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.42292362 = fieldWeight in 3925, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=3925)
          0.09519906 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
            0.09519906 = score(doc=3925,freq=4.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.54716086 = fieldWeight in 3925, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3925)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 15:22:28
    Theme
    Citation indexing
  3. Wyllie, J.; Eaton, S.: Faceted classification as an intelligence analysis tool (2007) 0.04
    0.041841652 = product of:
      0.12552495 = sum of:
        0.12552495 = weight(_text_:systematic in 716) [ClassicSimilarity], result of:
          0.12552495 = score(doc=716,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=716)
      0.33333334 = coord(1/3)
    
    Abstract
    Jan and Simon are developing a collaborative web-based resource to be called The Energy Centre (TEC). TEC will allow the collaborative collection of clips relating to all aspects of the energy sector. The clips will be stored and organized in such a way that they are not only easily searchable, but can also serve as the basis for content analysis, defined as 'a technique for systematic inference from communications'. Jan began by explaining that it was while working as an intelligence analyst at the Canadian Trend Report in Montreal that he learned about content analysis, a classic taxonomy-based intelligence research methodology.
  4. Harzing, A.-W.: Comparing the Google Scholar h-index with the ISI Journal Impact Factor (2008) 0.04
    0.041841652 = product of:
      0.12552495 = sum of:
        0.12552495 = weight(_text_:systematic in 855) [ClassicSimilarity], result of:
          0.12552495 = score(doc=855,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=855)
      0.33333334 = coord(1/3)
    
    Abstract
    Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF). This paper proposes an alternative metric (Hirsch's h-index) and data source (Google Scholar) to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.
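
  The h-index that entry 4 applies to journals is the largest h such that h of a unit's publications have at least h citations each. A minimal sketch of that definition (the citation counts are made-up illustration data, not from the paper):

      def h_index(citations):
          """Largest h such that h items have at least h citations each."""
          h = 0
          for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      # Hypothetical citation counts for one journal's articles.
      print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))  # -> 5
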
  5. Networked knowledge organization systems (2001) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 6473) [ClassicSimilarity], result of:
          0.10759281 = score(doc=6473,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 6473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=6473)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge Organization Systems can comprise thesauri and other controlled lists of keywords, ontologies, classification systems, clustering approaches, taxonomies, gazetteers, dictionaries, lexical databases, concept maps/spaces, semantic road maps, etc. These schemas enable knowledge structuring and management, knowledge-based data processing and systematic access to knowledge structures in individual collections and digital libraries. Used as interactive information services on the Internet, they have an increased potential to support the description, discovery and retrieval of heterogeneous information resources and to contribute to an overall resource discovery infrastructure.
  6. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.03
    0.027550971 = product of:
      0.08265291 = sum of:
        0.08265291 = sum of:
          0.055726547 = weight(_text_:indexing in 1163) [ClassicSimilarity], result of:
            0.055726547 = score(doc=1163,freq=6.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.2930101 = fieldWeight in 1163, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.03125 = fieldNorm(doc=1163)
          0.026926363 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
            0.026926363 = score(doc=1163,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.15476047 = fieldWeight in 1163, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1163)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
    Object
    Latent Semantic Indexing
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
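
  Entry 6 gauges the association between two named entities by the cosine of the angle between their representation vectors in the LSI space. A minimal sketch of that final step, assuming the entity vectors have already been extracted from the LSI space (the vectors below are made-up illustration data, not from the paper):

      import numpy as np

      def cosine(u, v):
          """Cosine of the angle between two representation vectors."""
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      # Hypothetical low-dimensional LSI vectors for two named entities.
      entity_a = np.array([0.12, -0.40, 0.31, 0.05])
      entity_b = np.array([0.10, -0.35, 0.28, 0.09])

      print(round(cosine(entity_a, entity_b), 3))  # close to 1.0 -> strongly associated
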
  7. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.02
    0.02462504 = product of:
      0.07387512 = sum of:
        0.07387512 = sum of:
          0.04021717 = weight(_text_:indexing in 1291) [ClassicSimilarity], result of:
            0.04021717 = score(doc=1291,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.21146181 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
          0.033657953 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
            0.033657953 = score(doc=1291,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.19345059 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
      0.33333334 = coord(1/3)
    
    Abstract
    More and more users index everything on their own in the Web 2.0. There are services for links, videos, pictures, books, encyclopaedic articles and scientific articles. All these services are independent of libraries. But must that really be so? Can't libraries, with their experience and tools, help to make user indexing better? Drawing on the experience of a project between the German-language Wikipedia and the German personal name authority file (Personennamendatei, PND) located at the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use the authority files, if we let them. We will take a look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in the Web 2.0; there must be an open interface/service for retrieval; everything that is indexed on the net with authority files can be easily integrated into a federated search; and, as O'Reilly says, you have to find ways for your data to become more important the more it is used.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  8. McIlwaine, I.: Section on Classification and Indexing : review of activities, 2000-2001 (2001) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 6905) [ClassicSimilarity], result of:
              0.12869495 = score(doc=6905,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 6905, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.125 = fieldNorm(doc=6905)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  9. Veltman, K.H.: From Recorded World to Recording Worlds (2007) 0.02
    0.020920826 = product of:
      0.06276248 = sum of:
        0.06276248 = weight(_text_:systematic in 512) [ClassicSimilarity], result of:
          0.06276248 = score(doc=512,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.22101676 = fieldWeight in 512, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.02734375 = fieldNorm(doc=512)
      0.33333334 = coord(1/3)
    
    Abstract
    The range, depths and limits of what we know depend on the media with which we attempt to record our knowledge. This essay begins with a brief review of developments in media (stone, manuscripts, books and digital media) to trace how collections of recorded knowledge grew to 235,000 in 1837 and have expanded to over 100 million unique titles in a single database, including over 1 billion individual listings, in 2007. The advent of digital media has brought full-text scanning and electronic networks, which enable us to consult digital books and images from our office, home or potentially even with our cell phones. These magnificent developments raise a number of concerns and new challenges. An historical survey of major projects that changed the world reveals that they have taken from one to eight centuries. This helps explain why commercial offerings, which offer useful and even profitable short-term solutions, often undermine a long-term vision. New technologies have the potential to transform our approach to knowledge, but they require a vision of a systematic new approach to knowledge. This paper outlines four ingredients for such a vision in the European context. First, the scope of European observatories should be expanded to inform memory institutions of the latest technological developments. Second, the quest for a European Digital Library should be expanded to include a distributed repository, a digital reference room and a virtual agora, whereby memory institutions will be linked with current research. Third, there is a need for an institute on Knowledge Organization that takes up anew Otlet's vision and the pioneering efforts of the Mundaneum (Brussels) and the Bridge (Berlin). Fourth, we need to explore requirements for a Universal Digital Library, which works with countries around the world rather than simply imposing an external system on them. Here, the efforts of the proposed European University of Culture could be useful. Ultimately we need new systems, which open research into multiple ways of knowing, multiple "knowledges". In the past, we went to libraries to study the recorded world. In a world where cameras and sensors are omnipresent, we have new recording worlds. In future, we may also use these recording worlds to study the riches of libraries.
  10. Smith, A.G.: Web links as analogues of citations (2004) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 4205) [ClassicSimilarity], result of:
              0.11260808 = score(doc=4205,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 4205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4205)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  11. Turner, J.M.; Mathieu, S.: Audio description text for indexing films (2007) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 701) [ClassicSimilarity], result of:
              0.11260808 = score(doc=701,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 701, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Access to audiovisual materials should be as open and free as access to print-based materials. However, we have not yet achieved such a reality. Methods useful for organising print-based materials do not necessarily work well when applied to audiovisual and multimedia materials. In this project, we studied the use of audio description text and written descriptions to generate keywords for indexing moving images. We found that such sources are fruitful and helpful. In the second part of the study, we looked at the possibility of automatically translating keywords from audio description text into other languages in order to use them for indexing. Here again, the results are encouraging.
    Content
    Lecture given at the World Library and Information Congress: 73rd IFLA General Conference and Council, 19-23 August 2007, Durban, South Africa. - 157 - Classification and Indexing
  12. Thaller, M.: From the digitized to the digital library (2001) 0.02
    0.017932136 = product of:
      0.053796407 = sum of:
        0.053796407 = weight(_text_:systematic in 1159) [ClassicSimilarity], result of:
          0.053796407 = score(doc=1159,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.18944295 = fieldWeight in 1159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
      0.33333334 = coord(1/3)
    
    Abstract
    The author holds a chair in Humanities Computer Science at the University of Cologne. For a number of years, he has been responsible for digitization projects, either as project director or as the person responsible for the technology being employed on the projects. The "Duderstadt project" (http://www.archive.geschichte.mpg.de/duderstadt/dud-e.htm) is one such project. It is one of the early large-scale manuscript servers, finished at the end of 1998, with approximately 80,000 high resolution documents representing the holdings of a city archive before the year 1600. The digital library of the Max-Planck-Institut für Europäische Rechtsgeschichte in Frankfurt (http://www.mpier.uni-frankfurt.de/dlib) is another project on which the author has worked, with currently approximately 900,000 pages. The author is currently project director of the project "Codices Electronici Ecclesiae Colonensis" (CEEC), which has just started and will ultimately consist of approximately 130,000 very high resolution color pages representing the complete holdings of the manuscript library of a medieval cathedral. It is being designed in close cooperation with the user community of such material. The project site (http://www.ceec.uni-koeln.de), while not yet officially opened, currently holds about 5,000 pages and is growing by 100 - 150 pages per day. Parallel to the CEEC model project, a conceptual project, the "Codex Electronicus Colonensis" (CEC), is at work on the definition of an abstract model for the representation of medieval codices in digital form. The following paper has grown out of the design considerations for the mentioned CEC project. The paper reflects a growing concern of the author's that some of the recent advances in digital (research) libraries are being diluted because it is not clear whether the advances really reach the audience for whom the projects would be most useful. Many, if not most, digitization projects have aimed at existing collections as individual servers. A digital library, however, should be more than a digitized one. It should be built according to principles that are not necessarily the same as those employed for paper collections, and it should be evaluated according to different measures which are not yet totally clear. The paper takes the form of six theses on various aspects of the ongoing transition to digital libraries. These theses have been presented at a forum on the German "retrodigitization" program. The program aims at the systematic conversion of library resources into digital form, concentrates for a number of reasons on material primarily of interest to the Humanities, and is funded by the German research council. As such this program is directly aimed at improving the overall infrastructure of academic research; other users of libraries are of interest, but are not central to the program.
  13. Lavoie, B.; Henry, G.; Dempsey, L.: ¬A service framework for libraries (2006) 0.02
    0.017932136 = product of:
      0.053796407 = sum of:
        0.053796407 = weight(_text_:systematic in 1175) [ClassicSimilarity], result of:
          0.053796407 = score(doc=1175,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.18944295 = fieldWeight in 1175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1175)
      0.33333334 = coord(1/3)
    
    Abstract
    Libraries have not been idle in the face of the changes re-shaping their environments: in fact, much work is underway and major advances have already been achieved. But these efforts lack a unifying framework, a means for libraries, as a community, to gather the strands of individual projects and weave them into a cohesive whole. A framework of this kind would help in articulating collective expectations, assessing progress, and identifying critical gaps. As the information landscape continually shifts and changes, a framework would promote the design and implementation of flexible, interoperable library systems that can respond more quickly to the needs of libraries in serving their constituents. It will provide a port of entry for organizations outside the library domain, and help them understand the critical points of contact between their services and those of libraries. Perhaps most importantly, a framework would assist libraries in strategic planning. It would provide a tool to help them establish priorities, guide investment, and anticipate future needs in uncertain environments. It was in this context, and in recognition of efforts already underway to align library services with emerging information environments, that the Digital Library Federation (DLF) in 2005 sponsored the formation of the Service Framework Group (SFG) [1] to consider a more systematic, community-based approach to aligning the functions of libraries with increasing automation in fulfilling the needs of information environments. The SFG seeks to understand and model the research library in today's environment, by developing a framework within which the services offered by libraries, represented both as business logic and computer processes, can be understood in relation to other parts of the institutional and external information landscape. This framework will help research institutions plan wisely for providing the services needed to meet the current and emerging information needs of their constituents. A service framework is a tool for documenting a shared view of library services in changing environments; communicating it among libraries and others, and applying it to best advantage in meeting library goals. It is a means of focusing attention and organizing discussion. It is not, however, a substitute for innovation and creativity. It does not supply the answers, but facilitates the process by which answers are sought, found, and applied. This paper discusses the SFG's vision of a service framework for libraries, its approach to developing the framework, and the group's work agenda going forward.
  14. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.02
    0.017561011 = product of:
      0.05268303 = sum of:
        0.05268303 = product of:
          0.10536606 = sum of:
            0.10536606 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
              0.10536606 = score(doc=1936,freq=10.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6055961 = fieldWeight in 1936, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  15. Bourdon, F.; Landry, P.: Best practices for subject access to national bibliographies : interim report by the Working Group on Guidelines for Subject Access by National Bibliographic Agencies (2007) 0.02
    0.016253578 = product of:
      0.04876073 = sum of:
        0.04876073 = product of:
          0.09752146 = sum of:
            0.09752146 = weight(_text_:indexing in 698) [ClassicSimilarity], result of:
              0.09752146 = score(doc=698,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5127677 = fieldWeight in 698, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=698)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The working group to establish guidelines for subject access by national bibliographic agencies was set up in 2005 in order to analyse the question of subject access and propose key elements for an indexing policy for national bibliographies. The group's mandate is to put forward recommendations based on best practices for subject access to national bibliographies. The group is presently assessing the elements which should be included in an indexing policy and will present an initial version of its recommendations in 2008.
    Content
    Lecture given at the World Library and Information Congress: 73rd IFLA General Conference and Council, 19-23 August 2007, Durban, South Africa. - 89 - Bibliography with National Libraries and Classification and Indexing
  16. McIlwaine, I.C.: Section on Classification and Indexing : review of activities 1999-2000 (2000) 0.02
    0.016086869 = product of:
      0.048260607 = sum of:
        0.048260607 = product of:
          0.09652121 = sum of:
            0.09652121 = weight(_text_:indexing in 5409) [ClassicSimilarity], result of:
              0.09652121 = score(doc=5409,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5075084 = fieldWeight in 5409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5409)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  17. Slavic, A.: UDC implementation : from library shelves to a structured indexing language (2003) 0.02
    0.016086869 = product of:
      0.048260607 = sum of:
        0.048260607 = product of:
          0.09652121 = sum of:
            0.09652121 = weight(_text_:indexing in 1934) [ClassicSimilarity], result of:
              0.09652121 = score(doc=1934,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5075084 = fieldWeight in 1934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1934)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  18. Yakel, E.: Seeking information, seeking connections, seeking meaning : genealogists and family historians (2004) 0.02
    0.016086869 = product of:
      0.048260607 = sum of:
        0.048260607 = product of:
          0.09652121 = sum of:
            0.09652121 = weight(_text_:indexing in 4206) [ClassicSimilarity], result of:
              0.09652121 = score(doc=4206,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5075084 = fieldWeight in 4206, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4206)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  19. Tennis, J.T.: Social tagging and the next steps for indexing (2006) 0.02
    0.016086869 = product of:
      0.048260607 = sum of:
        0.048260607 = product of:
          0.09652121 = sum of:
            0.09652121 = weight(_text_:indexing in 570) [ClassicSimilarity], result of:
              0.09652121 = score(doc=570,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5075084 = fieldWeight in 570, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=570)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  20. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    0.015707046 = product of:
      0.047121134 = sum of:
        0.047121134 = product of:
          0.09424227 = sum of:
            0.09424227 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.09424227 = score(doc=4643,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14

Types

  • a 30
  • p 2
  • s 2
  • i 1
  • n 1
  • x 1