Search (31 results, page 1 of 2)

  • × year_i:[2000 TO 2010}
  • × type_ss:"r"
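  The two active facet filters above appear to use Lucene/Solr query syntax (an assumption based on the ClassicSimilarity explain trees shown with each hit): a range query with mixed brackets and a string field filter. As filter-query parameters they would read, for example:

    fq=year_i:[2000 TO 2010}   (lower bound 2000 inclusive, upper bound 2010 exclusive)
    fq=type_ss:"r"             (filter on record type "r")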
  1. Hildebrand, M.; Ossenbruggen, J. van; Hardman, L.: ¬An analysis of search-based user interaction on the Semantic Web (2007) 0.04
    0.035163544 = product of:
      0.105490625 = sum of:
        0.05872617 = weight(_text_:applications in 59) [ClassicSimilarity], result of:
          0.05872617 = score(doc=59,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 59, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=59)
        0.018148692 = weight(_text_:of in 59) [ClassicSimilarity], result of:
          0.018148692 = score(doc=59,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.29624295 = fieldWeight in 59, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=59)
        0.028615767 = weight(_text_:systems in 59) [ClassicSimilarity], result of:
          0.028615767 = score(doc=59,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 59, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=59)
      0.33333334 = coord(3/9)
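    The scoring tree above is Lucene ClassicSimilarity explain output: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); the per-term scores are summed and multiplied by the coord factor (here 3 of 9 query clauses matched). A minimal Python sketch reproducing the displayed 0.035163544 from the numbers shown, with queryNorm and fieldNorm taken as given:

      import math

      MAX_DOCS, QUERY_NORM, FIELD_NORM, COORD = 44218, 0.03917671, 0.0546875, 3 / 9

      def term_score(freq, doc_freq):
          tf = math.sqrt(freq)                           # 1.4142135 for freq=2.0
          idf = 1 + math.log(MAX_DOCS / (doc_freq + 1))  # e.g. 4.4025 for docFreq=1471
          query_weight = idf * QUERY_NORM                # e.g. 0.17247584
          field_weight = tf * idf * FIELD_NORM           # e.g. 0.34048924
          return query_weight * field_weight             # e.g. 0.05872617

      # (freq, docFreq) for "applications", "of", "systems" in doc 59
      terms = [(2.0, 1471), (12.0, 25162), (2.0, 5561)]
      score = COORD * sum(term_score(freq, df) for freq, df in terms)
      print(round(score, 9))  # ~0.035163544, shown rounded as 0.04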
    
    Abstract
    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of semantic search features that are used during query construction, the core search process, the presentation of the search results and user feedback on query and results. For each of these, we consider the functionality that the system provides and how this is made available through the user interface.
  2. Colomb, R.M.: Quality of ontologies in interoperating information systems (2002) 0.02
    0.019158738 = product of:
      0.08621432 = sum of:
        0.022227516 = weight(_text_:of in 7858) [ClassicSimilarity], result of:
          0.022227516 = score(doc=7858,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36282203 = fieldWeight in 7858, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7858)
        0.0639868 = weight(_text_:systems in 7858) [ClassicSimilarity], result of:
          0.0639868 = score(doc=7858,freq=10.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.5314657 = fieldWeight in 7858, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7858)
      0.22222222 = coord(2/9)
    
    Abstract
    The focus of this paper is on the quality of ontologies as they relate to interoperating information systems. Quality is not a property of something but a judgment, so it must be relative to some purpose, and generally involves recognition of design tradeoffs. Ontologies used for information systems interoperability have much in common with classification systems in information science, knowledge-based systems, and programming languages, and inherit quality characteristics from each of these older areas. Factors peculiar to the new field lead to some additional characteristics relevant to quality, some of which are more profitably considered quality aspects not of the ontology as such, but of the environment through which the ontology is made available to its users. Suggestions are presented as to how to use these factors in producing quality ontologies.
  3. Hodge, G.: Systems of knowledge organization for digital libraries : beyond traditional authority files (2000) 0.02
    0.016650792 = product of:
      0.07492857 = sum of:
        0.020082738 = weight(_text_:of in 4723) [ClassicSimilarity], result of:
          0.020082738 = score(doc=4723,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 4723, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4723)
        0.054845825 = weight(_text_:systems in 4723) [ClassicSimilarity], result of:
          0.054845825 = score(doc=4723,freq=10.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.45554203 = fieldWeight in 4723, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4723)
      0.22222222 = coord(2/9)
    
    Abstract
    Access to digital materials continues to be an issue of great significance in the development of digital libraries. The proliferation of information in the networked digital environment poses challenges as well as opportunities. The author reports on a wide array of activities in the field. While this publication is not intended to be exhaustive, the reader will find, in a single work, an overview of systems of knowledge organization and pertinent examples of their application to digital materials.
    Content
    (1) Knowledge organization systems: an overview; (2) Linking digital library resources to related resources; (3) Making resources accessible to other communities; (4) Planning and implementing knowledge organization systems in digital libraries; (5) The future of knowledge organization systems on the Web
  4. Hellweg, H.; Krause, J.; Mandl, T.; Marx, J.; Müller, M.N.O.; Mutschke, P.; Strötgen, R.: Treatment of semantic heterogeneity in information retrieval (2001) 0.01
    0.011030885 = product of:
      0.049638983 = sum of:
        0.016935252 = weight(_text_:of in 6560) [ClassicSimilarity], result of:
          0.016935252 = score(doc=6560,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 6560, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6560)
        0.03270373 = weight(_text_:systems in 6560) [ClassicSimilarity], result of:
          0.03270373 = score(doc=6560,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 6560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=6560)
      0.22222222 = coord(2/9)
    
    Abstract
    Nowadays, users of information services are faced with highly decentralised, heterogeneous document sources that use differing forms of content analysis. Semantic heterogeneity occurs, for example, when resources using different systems for content description are searched using a simple query system. This report describes several approaches to handling semantic heterogeneity used in projects of the German Social Science Information Centre.
  5. Euzenat, J.; Bach, T.Le; Barrasa, J.; Bouquet, P.; Bo, J.De; Dieng, R.; Ehrig, M.; Hauswirth, M.; Jarrar, M.; Lara, R.; Maynard, D.; Napoli, A.; Stamou, G.; Stuckenschmidt, H.; Shvaiko, P.; Tessaris, S.; Acker, S. Van; Zaihrayeu, I.: State of the art on ontology alignment (2004) 0.01
    0.011030885 = product of:
      0.049638983 = sum of:
        0.016935252 = weight(_text_:of in 172) [ClassicSimilarity], result of:
          0.016935252 = score(doc=172,freq=32.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 172, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=172)
        0.03270373 = weight(_text_:systems in 172) [ClassicSimilarity], result of:
          0.03270373 = score(doc=172,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 172, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=172)
      0.22222222 = coord(2/9)
    
    Abstract
    In this document we provide an overall view of the state of the art in ontology alignment. It is organised as a description of the need for ontology alignment, a presentation of the techniques currently in use for ontology alignment and a presentation of existing systems. The survey is not restricted to any single discipline and, for instance, treats work on schema matching in the database area as a form of ontology alignment. Some heterogeneity problems on the Semantic Web can be solved by aligning heterogeneous ontologies. This is illustrated through a number of use cases of ontology alignment. Aligning ontologies consists of providing the corresponding entities in these ontologies. This process is precisely defined in deliverable D2.2.1. The current deliverable presents the many techniques currently used for implementing this process. These techniques are classified along the many features that can be found in ontologies (labels, structures, instances, semantics). They draw on many different disciplines such as statistics, machine learning and data analysis. The alignment itself is obtained by combining these techniques towards a particular goal (obtaining an alignment with particular features, optimising some criterion). Several combination techniques are also presented. Finally, these techniques have been tried out in various systems for ontology alignment or schema matching. Several such systems are presented briefly in the last section and characterised by the techniques they rely on. The conclusion is that many techniques are available for achieving ontology alignment and many systems have been developed based on these techniques. However, these implementations offer few comparisons and little integration. This deliverable serves as a basis for considering further action along these two lines. It provides a first inventory of what should be evaluated and suggests which evaluation criteria can be used.
    Content
    This document is part of a research project funded by the IST Programme of the Commission of the European Communities as project number IST-2004-507482.
  6. Sykes, J.: ¬The value of indexing : a white paper prepared for Factiva, Factiva, a Dow Jones and Reuters Company (2001) 0.01
    0.010716482 = product of:
      0.04822417 = sum of:
        0.03355781 = weight(_text_:applications in 720) [ClassicSimilarity], result of:
          0.03355781 = score(doc=720,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.19456528 = fieldWeight in 720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03125 = fieldNorm(doc=720)
        0.014666359 = weight(_text_:of in 720) [ClassicSimilarity], result of:
          0.014666359 = score(doc=720,freq=24.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23940048 = fieldWeight in 720, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=720)
      0.22222222 = coord(2/9)
    
    Abstract
    Finding particular documents after they have been reviewed and stored has been a challenge since the advent of the printed word. "Findability" is emphatically more important as we deal with information overload in general and with the specific need to quickly find relevant background information to support business decisions in a networked environment. Because time is arguably the most valuable asset in today's economy, information users value tools that help them (1) quickly find the information they are seeking and (2) manage the quantity and quality of information they manipulate and work with on a regular basis. Although the term "indexing" may lack the cachet of some other terms we use to describe current information organization and management concepts, indexing is fundamental to precise information organization and retrieval, especially when dealing with large sets of documents. Power users find great value in using a known, granular indexing language that can surface the most relevant items and filter out items of peripheral or no interest. Web architects and interface designers can likewise take advantage of indexing labels to present only the information meeting certain requirements for users who do not wish to learn the indexing structure or taxonomy. The user finds what is needed while the indexing language is used behind the scenes and is transparent to the user.
    The importance of indexing in developing a content navigation strategy for corporate intranets or portals and the value of high-quality indexing when retrieving information from external resources are reviewed in this white paper. Some general background information on indexing and the use of controlled vocabularies (or taxonomies) are included for a historical perspective. Factiva Intelligent Indexing, which incorporates the best indexing expertise from both Dow Jones Interactive and Reuters Business Briefing, is described, along with some novel customer applications that take advantage of Factiva's indexing to create or improve information products delivered to users. Examples from the Excite and Google web search engines and from Dow Jones Interactive and Reuters Business Briefing are included in an Appendix section to illustrate how indexing influences the amount and quality of information retrieved in a specific search.
  7. Harken, S.E.: Subject semantic interoperability. Report of the Subcommittee on Semantic Interoperability to the ALCTS Subject Analysis Committee : Final report (2006) 0.01
    0.010680728 = product of:
      0.048063274 = sum of:
        0.0140020205 = weight(_text_:of in 906) [ClassicSimilarity], result of:
          0.0140020205 = score(doc=906,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.22855641 = fieldWeight in 906, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=906)
        0.034061253 = weight(_text_:software in 906) [ClassicSimilarity], result of:
          0.034061253 = score(doc=906,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=906)
      0.22222222 = coord(2/9)
    
    Abstract
    The need for improved semantic interoperability between and among vocabularies and knowledge organization schemes is undeniable and growing in importance. There is an ever-increasing need to create an environment in which even multiple portals could be accessed via subject metadata, using software that is neutral and available ubiquitously or directly to the user, and that could be copied by libraries for use in their own environments. In order to develop or improve a knowledge organization system, including emerging options in semantic interoperability, scholars and practitioners need to be able to evaluate a wide variety of projects and stay current with the professional literature. Based on its findings, the Subcommittee concludes that the development of a successful subject semantic interoperability project is a long and difficult process. It requires a substantial investment of financial, human and computer resources. The Subcommittee recommends using the information and tools in this report and its appendices to assist in developing a successful project incorporating subject semantic interoperability. Finally, the Subcommittee concludes that since this field of endeavor is still relatively young and immature, it is too early to generate a set of Best Practices that could be used in developing a successful project. We are past the theoretical and basic research phase and into the development phase. Even though there are some successful projects in full production, more projects need to reach maturity and much more research needs to be done.
    Issue
    Submitted by Chair, Shelby E. Harken, University of North Dakota, approved by SAC June 2006.
  8. Hegner, M.: Methode zur Evaluation von Software (2003) 0.01
    0.010494272 = product of:
      0.047224224 = sum of:
        0.0063507194 = weight(_text_:of in 2499) [ClassicSimilarity], result of:
          0.0063507194 = score(doc=2499,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.103663445 = fieldWeight in 2499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2499)
        0.040873505 = weight(_text_:software in 2499) [ClassicSimilarity], result of:
          0.040873505 = score(doc=2499,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 2499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=2499)
      0.22222222 = coord(2/9)
    
    Abstract
    This working report deals with two different usability methods: the usability test and usability inspection methods. In usability inspection methods, the user interface is evaluated by ergonomics experts. These methods include heuristic evaluation, standard inspection, cognitive walkthrough, etc. The advantage of these inspection methods is that they are less time-consuming and less expensive than a usability test. The usability test, in contrast to the inspection methods, is carried out with representative test subjects. It is an efficient means of evaluating user interfaces or checking their usability. The report further explains the various usability test methods as well as the basic elements for conducting a usability test. Finally, analysis of variance (ANOVA) is discussed in more detail as a statistical procedure for testing differences between means.
  9. Carey, K.; Stringer, R.: ¬The power of nine : a preliminary investigation into navigation strategies for the new library with special reference to disabled people (2000) 0.01
    0.009899746 = product of:
      0.044548854 = sum of:
        0.012701439 = weight(_text_:of in 234) [ClassicSimilarity], result of:
          0.012701439 = score(doc=234,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=234)
        0.031847417 = product of:
          0.063694835 = sum of:
            0.063694835 = weight(_text_:22 in 234) [ClassicSimilarity], result of:
              0.063694835 = score(doc=234,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.46428138 = fieldWeight in 234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=234)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
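    The block for entry 9 differs from the earlier ones in that the "22" term sits inside a nested sum with its own coord(1/2), i.e. it matched only one of two sub-clauses; that inner result is then combined with the "of" weight under the outer coord(2/9). A short check of the displayed numbers (a sketch, values copied from the tree above):

      inner = 0.063694835 * (1 / 2)            # weight(_text_:22) * coord(1/2) = 0.031847417
      total = (0.012701439 + inner) * (2 / 9)  # (weight(_text_:of) + inner) * coord(2/9)
      print(round(total, 9))                   # ~0.009899746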
    
    Pages
    22 S
  10. Report on the future of bibliographic control : draft for public comment (2007) 0.01
    0.0091910185 = product of:
      0.04135958 = sum of:
        0.025168357 = weight(_text_:applications in 1271) [ClassicSimilarity], result of:
          0.025168357 = score(doc=1271,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.14592396 = fieldWeight in 1271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.016191222 = weight(_text_:of in 1271) [ClassicSimilarity], result of:
          0.016191222 = score(doc=1271,freq=52.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.26429096 = fieldWeight in 1271, product of:
              7.2111025 = tf(freq=52.0), with freq of:
                52.0 = termFreq=52.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
      0.22222222 = coord(2/9)
    
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary, the World Wide Web, is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
    The Library of Congress must begin by prioritizing the recommendations that are directed in whole or in part at LC. Some define tasks that can be achieved immediately and with moderate effort; others will require analysis and planning that will have to be coordinated broadly and carefully. The Working Group has consciously not associated time frames with any of its recommendations. The recommendations fall into five general areas: 1. Increase the efficiency of bibliographic production for all libraries through increased cooperation and increased sharing of bibliographic records, and by maximizing the use of data produced throughout the entire "supply chain" for information resources. 2. Transfer effort into higher-value activity. In particular, expand the possibilities for knowledge creation by "exposing" rare and unique materials held by libraries that are currently hidden from view and, thus, underused. 3. Position our technology for the future by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards. Recognize that people are not the only users of the data we produce in the name of bibliographic control, but so too are machine applications that interact with those data in a variety of ways. 4. Position our community for the future by facilitating the incorporation of evaluative and other user-supplied information into our resource descriptions. Work to realize the potential of the FRBR framework for revealing and capitalizing on the various relationships that exist among information resources. 5. Strengthen the library profession through education and the development of metrics that will inform decision-making now and in the future. The Working Group intends what follows to serve as a broad blueprint for the Library of Congress and its colleagues in the library and information technology communities for extending and promoting access to information resources.
    Editor
    Library of Congress / Working Group on the Future of Bibliographic Control
  11. Sykes, J.: Making solid business decisions through intelligent indexing taxonomies : a white paper prepared for Factiva, Factiva, a Dow Jones and Reuters Company (2003) 0.01
    0.00853117 = product of:
      0.038390264 = sum of:
        0.01526523 = weight(_text_:of in 721) [ClassicSimilarity], result of:
          0.01526523 = score(doc=721,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2491759 = fieldWeight in 721, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=721)
        0.023125032 = weight(_text_:systems in 721) [ClassicSimilarity], result of:
          0.023125032 = score(doc=721,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.19207339 = fieldWeight in 721, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=721)
      0.22222222 = coord(2/9)
    
    Abstract
    In 2000, Factiva published "The Value of Indexing," a white paper emphasizing the strategic importance of accurate categorization, based on a robust taxonomy for later retrieval of documents stored in commercial or in-house content repositories. Since that time, there has been resounding agreement between persons who use Web-based systems and those who design these systems that search engines alone are not the answer for effective information retrieval. High-quality categorization is crucial if users are to be able to find the right answers in repositories of articles and documents that are expanding at phenomenal rates. Companies continue to invest in technologies that will help them organize and integrate their content. A March 2002 article in EContent suggests a typical taxonomy implementation usually costs around $100,000. The article also cites a Merrill Lynch study that predicts the market for search and categorization products, now at about $600 million, will more than double by 2005. Classification activities are not new. In the third century B.C., Callimachus of Cyrene managed the ancient Library of Alexandria. To help scholars find items in the collection, he created an index of all the scrolls organized according to a subject taxonomy. Factiva's parent companies, Dow Jones and Reuters, each have more than 20 years of experience with developing taxonomies and painstaking manual categorization processes and also have a solid history with automated categorization techniques. This experience and expertise put Factiva at the leading edge of developing and applying categorization technology today. This paper will update readers about enhancements made to the Factiva Intelligent Indexing taxonomy. It examines the value these enhancements bring to Factiva's news and business information service, and the value brought to clients who license the Factiva taxonomy as a fundamental component of their own Enterprise Knowledge Architecture. There is a behind-the-scenes look at how Factiva classifies a huge stream of incoming articles published in a variety of formats and languages. The paper concludes with an overview of new Factiva services and solutions that are designed specifically to help clients improve productivity and make solid business decisions by precisely finding information in their own ever-expanding content repositories.
  12. Forschen für die Internet-Gesellschaft : Trends, Technologien, Anwendungen, Trends und Handlungsempfehlungen 2008 des Feldafinger Kreises (2008) 0.00
    0.004281768 = product of:
      0.03853591 = sum of:
        0.03853591 = weight(_text_:software in 2337) [ClassicSimilarity], result of:
          0.03853591 = score(doc=2337,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.24794699 = fieldWeight in 2337, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=2337)
      0.11111111 = coord(1/9)
    
    Content
    The 2008 trend statements of the Feldafinger Kreis: 1. The Future Internet will become the global, reliable platform for all services. 2. Peer-to-peer networking enables information exchange without a central authority. 3. Software becomes a component of almost all products. 4. Security becomes a basic prerequisite for the acceptance of services. 5. Semantic technologies turn information into knowledge. 6. Consistent knowledge management is the basis of corporate success. 7. Intelligent software agents take over routine tasks. 8. Service grids form the Internet of Services. 9. ICT ensures energy efficiency and security of supply. 10. Self-organisation reduces complexity and increases reliability. 11. e-processes increase competitiveness through Internet-based business processes. 12. The Internet of Things provides for information exchange between objects. 13. New driver assistance systems enable proactive safety. 14. Networked, digital environments support people in all situations of life. 15. Intuitive interaction paradigms will make it easier for everyone to use the Internet.
  13. Binder, G.; Stahl, M.; Faulborn, L.: Vergleichsuntersuchung MESSENGER-FULCRUM (2000) 0.00
    0.0031795297 = product of:
      0.028615767 = sum of:
        0.028615767 = weight(_text_:systems in 4885) [ClassicSimilarity], result of:
          0.028615767 = score(doc=4885,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 4885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4885)
      0.11111111 = coord(1/9)
    
    Abstract
    In a user test conducted within the GIRT project, the performance of two retrieval languages for database searching was examined. The results are presented in this report: the FULCRUM system is based on automatic indexing and returns a result list ranked by statistical relevance. The standard free-text search of the MESSENGER system was supplemented with the descriptors assigned intellectually by the IZ. The results show that in FULCRUM the test subjects preferred Boolean exact-match retrieval to the vector space model (best-match approach). The hybrid of intellectual and automatic indexing realised in MESSENGER proved superior to the quantitative-statistical approach in terms of recall.
  14. Borghoff, U.M.; Rödig, P.; Schmalhofer, F.: DFG-Projekt Datenbankgestützte Langzeitarchivierung digitaler Objekte : Schlussbericht Juli 2005 - Geschäftszeichen 554 922(1) UV BW Mänchen (2005) 0.00
    0.003027667 = product of:
      0.027249003 = sum of:
        0.027249003 = weight(_text_:software in 4250) [ClassicSimilarity], result of:
          0.027249003 = score(doc=4250,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.17532499 = fieldWeight in 4250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=4250)
      0.11111111 = coord(1/9)
    
    Abstract
    Over the last decades the volume of digital publications has grown exponentially. Yet these digital holdings are threatened by the creeping obsolescence of data formats, software and hardware. The increasing complexity of newer documents and of the environments needed to render them also poses a problem. The topic of long-term preservation was neglected for a long time, but it is increasingly entering the awareness of those responsible and of the public, not least because of spectacular data losses. The goal of this study is to develop foundations and building blocks for a technical solution and to show how it can be embedded in the task areas of an archiving organisation. What is missing is a systematic approach to building up technical knowledge that does justice to the heterogeneity and complexity, as well as to the obsolescence already present, in the world of digital publishing. In a first step we therefore develop a model devoted specifically to the technical aspects of digital objects. This model makes it possible to characterise and classify digital objects with respect to archiving aspects and to assign technical foundations precisely. On this basis, modular metadata schemas that specifically support long-term preservation can, among other things, be derived systematically. The model also contributes to the formulation of associated ontologies. Furthermore, the modularity of the metadata schemas and the uniform terminology of an ontology promote federation and cooperation among archiving organisations and systems. In a further step, building on the developed model systematises the derivation of technically oriented processes for fulfilling archiving tasks. The development of our own model rests on the assessment that reference models such as OAIS (Open Archival Information System) offer a suitable starting point at the conceptual level, but are too general and describe upstream and downstream processes only as interfaces. The solution approaches derived from the model are initially independent of a concrete realisation. As a contribution to implementation, a separate section discusses in detail the use of database management systems (DBMS) as an implementation basis.
  15. Puzicha, J.: Informationen finden! : Intelligente Suchmaschinentechnologie & automatische Kategorisierung (2007) 0.00
    0.002725311 = product of:
      0.0245278 = sum of:
        0.0245278 = weight(_text_:systems in 2817) [ClassicSimilarity], result of:
          0.0245278 = score(doc=2817,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 2817, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2817)
      0.11111111 = coord(1/9)
    
    Abstract
    As explained in this text, the effectiveness of search and classification systems is determined by: 1) the task at hand, 2) the accuracy of the system, 3) the degree of automation to be achieved, and 4) the ease of integration into existing systems. These criteria assume that every system, regardless of technology, is able to meet basic product requirements with respect to functionality, scalability and input method. These product characteristics are explained in more detail in the Recommind product literature. Starting from these capabilities, however, the preceding discussion should have revealed some clear trends. It is not surprising that recent developments in machine learning and other areas of computer science provide a theoretical starting point for the development of search engine and classification technology. In particular, recent advances in statistical methods (PLSA) and other mathematical tools (SVMs) have achieved breakthrough levels of result quality. Added to this are the flexibility in application afforded by the self-training and category recognition of PLSA systems, as well as a new generation of previously unattained productivity improvements.
  16. Lubetzky, S.: Principles of cataloging (2001) 0.00
    0.002424508 = product of:
      0.021820573 = sum of:
        0.021820573 = weight(_text_:of in 2627) [ClassicSimilarity], result of:
          0.021820573 = score(doc=2627,freq=34.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.35617945 = fieldWeight in 2627, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2627)
      0.11111111 = coord(1/9)
    
    Abstract
    This report constitutes Phase I of a two-part study; a Phase II report will discuss subject cataloging. Phase I is concerned with the materials of a library as individual records (or documents) and as representations of certain works by certain authors--that is, with descriptive, or bibliographic, cataloging. Discussed in the report are (1) the history, role, function, and objectives of the author-and-title catalog; (2) problems and principles of descriptive cataloging, including the use and function of "main entry," the principle of authorship, and the process and problems of cataloging print and nonprint materials; (3) organization of the catalog; and (4) potentialities of automation. The considerations inherent in bibliographic cataloging, such as the distinction between the "book" and the "work," are said to be so elemental that they are essential not only to the effective control of a library's materials but also to that of the information contained in the materials. Because of the special concern with information, the author includes a discussion of the "Bibliographic Dimensions of Information Control," prepared in collaboration with Robert M. Hayes, which also appears in "American Documentation," Vol. 20, July 1969, p. 247-252.
    Imprint
    Los Angeles : California Univ., Inst. of Library Research
  17. Bredemeier, W.; Stock, M.; Stock, W.G.: ¬Die Branche elektronischer Geschäftsinformationen in Deutschland 2000/2001 (2001) 0.00
    0.0022710927 = product of:
      0.020439833 = sum of:
        0.020439833 = weight(_text_:systems in 621) [ClassicSimilarity], result of:
          0.020439833 = score(doc=621,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=621)
      0.11111111 = coord(1/9)
    
    Content
    The German market for electronic information services in the year 2000 - results of a revenue survey - by Willi Bredemeier: a validated methodology that takes account of the specifics of the EIS market and of current developments; partial comparability of the data from 1989 onwards; extensive quantitative market transparency, since the reader can fully trace how the market and sub-market figures are aggregated from the data of individual companies; 93 partly detailed tables, mostly on individual information providers with particular attention to the business years 2000 and 1999, divided into the areas total market for electronic information services, Datev, real-time financial information, news agencies, credit information, company and product information, further business information, legal information, scientific-technical-medical information, intellectual property, consumer services and neighbouring markets; analysis of current market trends. Quality of professional company information on the World Wide Web - by Mechtild Stock and Wolfgang G. Stock: continuation of the quality discussion and development of a system of quality criteria for information offerings, applied to company information on the Internet; a "quality panel" for the areas credit information, short company dossiers, product information and address information, covering the providers Bürgel, Creditreform, Dun & Bradstreet Deutschland, ABC online, ALLECO, Hoppenstedt Firmendatenbank, Who is Who in Multimedia, Kompass Deutschland, Sachon Industriedaten, Wer liefert was?, AZ Bertelsmann, Schober.com; highly differentiated tests that help customers choose between offerings and give providers pointers towards measures for qualitative improvement; detailed information on the industry and product classification systems in use; rankings of the company information providers overall as well as by database, retrieval system and website, with detailed information on all quality dimensions.
  18. De Rosa, C.; Cantrell, J.; Cellentani, D.; Hawk, J.; Jenkins, L.; Wilson, A.: Perceptions of libraries and information resources : A Report to the OCLC Membership (2005) 0.00
    0.0021169065 = product of:
      0.019052157 = sum of:
        0.019052157 = weight(_text_:of in 5018) [ClassicSimilarity], result of:
          0.019052157 = score(doc=5018,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 5018, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5018)
      0.11111111 = coord(1/9)
    
    Abstract
    Summarizes findings of an international study on information-seeking habits and preferences: With extensive input from hundreds of librarians and OCLC staff, the OCLC Market Research team developed a project and commissioned Harris Interactive Inc. to survey a representative sample of information consumers. In June of 2005, we collected over 3,300 responses from information consumers in Australia, Canada, India, Singapore, the United Kingdom and the United States. The Perceptions report provides the findings and responses from the online survey in an effort to learn more about: (1) library use; (2) awareness and use of library electronic resources; (3) free vs. for-fee information; and (4) the "Library" brand. The findings indicate that information consumers view libraries as places to borrow print books, but they are unaware of the rich electronic content they can access through libraries. Even though information consumers make limited use of these resources, they continue to trust libraries as reliable sources of information.
  19. Landry, P.; Zumer, M.; Clavel-Merrin, G.: Report on cross-language subject access options (2006) 0.00
    0.0021169065 = product of:
      0.019052157 = sum of:
        0.019052157 = weight(_text_:of in 2433) [ClassicSimilarity], result of:
          0.019052157 = score(doc=2433,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 2433, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2433)
      0.11111111 = coord(1/9)
    
    Abstract
    This report presents the results of a desk-based study of projects and initiatives in the area of linking and mapping subject tools. While its goal is to provide areas of further study for cross-language subject access in the European Library, and specifically the national libraries of the Ten New Member States, it is not restricted to cross-language mappings since some of the tools used to create links across thesauri or subject headings in the same language may also be appropriate for cross-language mapping. Tools reviewed have been selected to represent a variety of approaches (e.g. subject heading to subject heading, thesaurus to thesaurus, classification to subject heading) reflecting the variety of subject access tools in use in the European Library. The results show that there is no single solution that would be appropriate for all libraries but that parts of several initiatives may be applicable on a technical, organisational or content level.
  20. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.00
    0.002087298 = product of:
      0.018785682 = sum of:
        0.018785682 = weight(_text_:of in 2417) [ClassicSimilarity], result of:
          0.018785682 = score(doc=2417,freq=70.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3066406 = fieldWeight in 2417, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2417)
      0.11111111 = coord(1/9)
    
    Abstract
    This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded. - Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics. - While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations. - The sole reliance on citation data provides at best an incomplete and often shallow understanding of research - an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
    The validity of statistics such as the impact factor and h-index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of "experience." The justification for relying on them is that they are "readily available." The few studies of these statistics that were done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data. We do not dismiss citation statistics as a tool for assessing the quality of research: citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily-derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool. We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
    Imprint
    Joint IMU/ICIAM/IMS-Committee on Quantitative Assessment of Research : o.O.