Search (50 results, page 3 of 3)

  • Filter: language_ss:"e"
  • Filter: type_ss:"r"
  1. Crawford, J.C.; Thorn, L.C.; Powles, J.A.: ¬A survey of subject access to academic library catalogues in Great Britain : a report to the British Library Research and Development Department (1992) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 367) [ClassicSimilarity], result of:
              0.038116705 = score(doc=367,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
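The indented breakdowns shown under each result are Lucene "explain" trees for its classic TF-IDF similarity. As a sanity check, the first tree can be reproduced with a short sketch; the function names are illustrative, and all constants (freq=2, docFreq=5561, maxDocs=44218, fieldNorm=0.0546875, queryNorm=0.052184064, two coord factors of 1/2) are taken directly from the explanation above:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity inverse document frequency:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_score(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    tf = math.sqrt(freq)                 # tf = sqrt(termFreq), 1.4142135 for freq=2
    i = idf(doc_freq, max_docs)          # ~3.0731742 for docFreq=5561, maxDocs=44218
    query_weight = i * query_norm        # ~0.16037072
    field_weight = tf * i * field_norm   # ~0.23767869
    return query_weight * field_weight * coord

# Two nested coord(1/2) factors multiply to 0.25.
score = classic_score(2.0, 5561, 44218, 0.0546875, 0.052184064, 0.25)
```

Multiplying the pieces out recovers the displayed final score of about 0.0095, confirming the tree is a plain tf * idf * fieldNorm * queryNorm product with coord damping.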
    Abstract
    The study of subject access to UK academic library catalogues was based on a questionnaire sent out during Summer 1991. 86 out of a possible 110 questionnaires were returned. All universities and polytechnics now have OPACs which are progressing well towards comprehensive bibliographical coverage of their libraries' stocks. The MARC format is now widely used. Subject access strategies are usually based on either Library of Congress Subject Headings or in-house indexing systems, but almost half the OPACs studied have no separate subject searching option. Subject indexing is expensive, and future subject indexing strategies are best based on pre-existing controlled vocabularies. Subject authority control is essential. A limited range of software strategies is recommended, including the need to limit search results.
  2. Hildebrand, M.; Ossenbruggen, J. van; Hardman, L.: ¬An analysis of search-based user interaction on the Semantic Web (2007) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 59) [ClassicSimilarity], result of:
              0.038116705 = score(doc=59,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 59, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=59)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of semantic search features that are used during query construction, the core search process, the presentation of the search results and user feedback on query and results. For each of these, we consider the functionality that the system provides and how this is made available through the user interface.
  3. Barker, P.: ¬An examination of the use of the OSI Directory for accessing bibliographic information : project ABDUX (1993) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 7310) [ClassicSimilarity], result of:
              0.03267146 = score(doc=7310,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 7310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the work of the ABDUX project, containing a brief description of the rationale for using X.500 for access to bibliographic information. Outlines the project's design work and a demonstration system. Reviews the standards applicable to bibliographic data and library OPACs. Highlights difficulties found when handling bibliographic data in library systems. Discusses the service requirements of OPACs for accessing bibliographic data, and how X.500 Directory services may be used. Suggests the DIT structures that could be used for storing both bibliographic information and descriptions of information resources in general in the directory. Describes the way in which the model of bibliographic data is presented. Outlines the syntax of ASN.1 and how records and fields may be described in terms of X.500 object classes and attribute types. Details the mapping of MARC format into an X.500 compatible form. Provides the schema information for representing research notes and archives, not covered by MARC definitions. Examines the success in implementing the designs and looks ahead to future possibilities.
  4. SARA (SGML Aware Retrieval Application) Workshop, 19th June 1994 (1994) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 756) [ClassicSimilarity], result of:
              0.03267146 = score(doc=756,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 756, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=756)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Summarizes the workshop, held in Oxford, 19th Jun 94, to launch SARA, the SGML Aware Retrieval Application, a sophisticated searching and retrieval software product developed as part of the British National Corpus (BNC) project to allow rapid and sophisticated analysis of the BNC and other text materials encoded using SGML, and to allow the academic community access to the BNC as easily as possible. The British National Corpus is a 3 year project to build a 100 million word corpus of contemporary (mostly post 1974) spoken and written English, taken from a range of sources, including fiction and non fiction books, academic periodicals, unpublished materials, radio broadcasts, and transcriptions of spoken conversations. The entire tagged corpus is due to be released in 1994 and is expected to be used for purposes such as: reference book publishing; linguistic research; and the development of systems for natural language processing and artificial intelligence.
  5. Ramsden, A.; Wu, Z.; Zhao, D.G.: ¬The pilot phase of the ELINOR Electronic Library Project, March 1992-April 1994 (1994) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 2618) [ClassicSimilarity], result of:
              0.03267146 = score(doc=2618,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 2618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2618)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the ELINOR (Electronic Library INformation Online Retrieval) Electronic Library Project, at De Montfort University, UK, which aims to convert library primary materials and course documents to electronic form and to make the full text documents accessible to teaching staff and students in an electronic workstation environment. This pilot phase of the ELINOR Electronic Library Project demonstrated the feasibility of collecting electronic documents for 1 undergraduate course, BA/BSc Business Information Systems (BIS), and the benefits of optical character recognition (OCR), scanning, and document image processing (DIP) techniques in a client server environment. A key feature of the project was the negotiation of short term licences from 11 publishers for 53 textbooks. Publishers were prepared to participate in the project to provide useful early experience in copyright management on a small scale.
  6. Multilingual information management : current levels and future abilities. A report Commissioned by the US National Science Foundation and also delivered to the European Commission's Language Engineering Office and the US Defense Advanced Research Projects Agency, April 1999 (1999) 0.01
    0.007700737 = product of:
      0.015401474 = sum of:
        0.015401474 = product of:
          0.030802948 = sum of:
            0.030802948 = weight(_text_:systems in 6068) [ClassicSimilarity], result of:
              0.030802948 = score(doc=6068,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19207339 = fieldWeight in 6068, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6068)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This picture will rapidly change. The twin challenges of massive information overload via the web and ubiquitous computers present us with an unavoidable task: developing techniques to handle multilingual and multi-modal information robustly and efficiently, with as high quality performance as possible. The most effective way for us to address such a mammoth task, and to ensure that our various techniques and applications fit together, is to start talking across the artificial research boundaries. Extending the current technologies will require integrating the various capabilities into multi-functional and multi-lingual natural language systems. However, at this time there is no clear vision of how these technologies could or should be assembled into a coherent framework. What would be involved in connecting a speech recognition system to an information retrieval engine, and then using machine translation and summarization software to process the retrieved text? How can traditional parsing and generation be enhanced with statistical techniques? What would be the effect of carefully crafted lexicons on traditional information retrieval? At which points should machine translation be interleaved within information retrieval systems to enable multilingual processing?
  7. Sykes, J.: Making solid business decisions through intelligent indexing taxonomies : a white paper prepared for Factiva, Factiva, a Dow Jones and Reuters Company (2003) 0.01
    0.007700737 = product of:
      0.015401474 = sum of:
        0.015401474 = product of:
          0.030802948 = sum of:
            0.030802948 = weight(_text_:systems in 721) [ClassicSimilarity], result of:
              0.030802948 = score(doc=721,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19207339 = fieldWeight in 721, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=721)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2000, Factiva published "The Value of Indexing," a white paper emphasizing the strategic importance of accurate categorization, based on a robust taxonomy, for later retrieval of documents stored in commercial or in-house content repositories. Since that time, there has been resounding agreement between persons who use Web-based systems and those who design these systems that search engines alone are not the answer for effective information retrieval. High-quality categorization is crucial if users are to be able to find the right answers in repositories of articles and documents that are expanding at phenomenal rates. Companies continue to invest in technologies that will help them organize and integrate their content. A March 2002 article in EContent suggests a typical taxonomy implementation usually costs around $100,000. The article also cites a Merrill Lynch study that predicts the market for search and categorization products, now at about $600 million, will more than double by 2005. Classification activities are not new. In the third century B.C., Callimachus of Cyrene managed the ancient Library of Alexandria. To help scholars find items in the collection, he created an index of all the scrolls organized according to a subject taxonomy. Factiva's parent companies, Dow Jones and Reuters, each have more than 20 years of experience with developing taxonomies and painstaking manual categorization processes and also have a solid history with automated categorization techniques. This experience and expertise put Factiva at the leading edge of developing and applying categorization technology today. This paper will update readers about enhancements made to the Factiva Intelligent Indexing™ taxonomy. It examines the value these enhancements bring to Factiva's news and business information service, and the value brought to clients who license the Factiva taxonomy as a fundamental component of their own Enterprise Knowledge Architecture.
There is a behind-the-scenes look at how Factiva classifies a huge stream of incoming articles published in a variety of formats and languages. The paper concludes with an overview of new Factiva services and solutions that are designed specifically to help clients improve productivity and make solid business decisions by precisely finding information in their own ever-expanding content repositories.
  8. Ward, S.: Networked CD-ROMs as academic information sources : the growth of networked electronic information sources in academic libraries (1993) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 7233) [ClassicSimilarity], result of:
              0.027226217 = score(doc=7233,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 7233, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7233)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines the place of CD-ROMs in academic libraries and in particular the use of network solutions to widen access to these services. A questionnaire survey was undertaken of academic libraries in the UK, Eire, USA and Canada and the results analysed. 9 UK libraries were selected as case studies for more detailed examination of the issues involved, in particular the management issues. These case studies were selected to cover a variety of experiences and circumstances. A parallel survey looked at CD-ROM publishing and a questionnaire was sent to publishers, but the response was not as good as that to the previous survey. Trends in CD-ROM publishing, including the future of CD-ROM in the views of the publishers and of librarians, suggest that other electronic media may replace CD-ROM for some applications but that CD-ROM is likely to remain a part of hybrid information systems. The networking of CD-ROM services is constrained by the cost, by technical complexity, and by restrictive licensing agreements. Future electronic information services may include regionally or nationally mounted databases accessible over the Internet or over SuperJanet in the UK. Issues such as the electronic library or the virtual library, and document delivery services are likely to gain prominence.
  9. Coles, B.R.: ¬The scientific, technical and medical information system in the UK : a study on behalf of the Royal Society, the British Library and the Association of Learned and Professional Society Publishers (1993) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 5350) [ClassicSimilarity], result of:
              0.027226217 = score(doc=5350,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 5350, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5350)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports on the findings of the major study, carried out by the Royal Society, etc. and prompted by the concern felt about the increasing signs of strain in the scientific, technical and medical information systems (STM) and the consequences for technical research. The report is a follow up to the earlier study (BLRDD report 5626) with the aim of covering the trends which have developed since the earlier report was published (online information retrieval, electronic networks, CD-ROM etc.). The study covers: the nature of the UK scientific, technical and medical information system; users of the STM information system; the changing role of libraries and librarians with regard to periodicals, books and other services; economic aspects of the STM information system (research libraries, primary publishing, secondary publishing, and value of scientific research); economic aspects of the STM information system, and perceived problems and potential changes with regard to primary periodicals, electronic periodicals, user interaction, and funding of the services. The data derived from the user survey and the library survey are published in full with analysis. Presents the conclusions and recommendations arising from the study
  10. ALA / Subcommittee on Subject Relationships/Reference Structures: Final Report to the ALCTS/CCS Subject Analysis Committee (1997) 0.01
    0.0067381454 = product of:
      0.013476291 = sum of:
        0.013476291 = product of:
          0.026952581 = sum of:
            0.026952581 = weight(_text_:systems in 1800) [ClassicSimilarity], result of:
              0.026952581 = score(doc=1800,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.16806422 = fieldWeight in 1800, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1800)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The SAC Subcommittee on Subject Relationships/Reference Structures was authorized at the 1995 Midwinter Meeting and appointed shortly before Annual Conference. Its creation was one result of a discussion of how (and why) to promote the display and use of broader-term subject heading references, and its charge reads as follows: To investigate: (1) the kinds of relationships that exist between subjects, the display of which are likely to be useful to catalog users; (2) how these relationships are or could be recorded in authorities and classification formats; (3) options for how these relationships should be presented to users of online and print catalogs, indexes, lists, etc. By the summer 1996 Annual Conference, make some recommendations to SAC about how to disseminate the information and/or implement changes. At that time assess the need for additional time to investigate these issues. The Subcommittee's work on each of the imperatives in the charge was summarized in a report issued at the 1996 Annual Conference (Appendix A). Highlights of this work included the development of a taxonomy of 165 subject relationships; a demonstration that, using existing MARC coding, catalog systems could be programmed to generate references they do not currently support; and an examination of reference displays in several CD-ROM database products. Since that time, work has continued on identifying term relationships and display options; on tracking research, discussion, and implementation of subject relationships in information systems; and on compiling a list of further research needs.
