Search (53 results, page 2 of 3)

  • × type_ss:"a"
  • × type_ss:"el"
  • × year_i:[1990 TO 2000}
  1. Brüggemann-Klein, A.; Klein, R.; Landgraf, B.: BibRelEx : Exploring bibliographic databases by visualization of annotated content-based relations (1999) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 1157) [ClassicSimilarity], result of:
              0.009076704 = score(doc=1157,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 1157, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1157)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
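    The score explanations above (and in every entry below) are Lucene ClassicSimilarity output, i.e. plain TF-IDF with query and length normalization; only the term frequency, fieldNorm, and document number vary from entry to entry. As a reading aid, here is a minimal Python sketch that reproduces this entry's numbers from the factors shown (the variable names are ours, not Lucene API calls):

      import math

      # Factors copied from the explain tree for doc 1157, term "a"
      freq       = 10.0         # termFreq of "a" in the matched field
      idf        = 1.153047     # ln(44218 / (37942 + 1)) + 1 under ClassicSimilarity
      query_norm = 0.046056706  # queryNorm, makes scores comparable across queries
      field_norm = 0.046875     # length normalization stored at index time
      coord      = 0.5          # coord(1/2): one of two query clauses matched

      tf           = math.sqrt(freq)              # 3.1622777 = tf(freq=10.0)
      query_weight = idf * query_norm             # 0.053105544 = queryWeight
      field_weight = tf * idf * field_norm        # 0.1709182  = fieldWeight
      weight       = query_weight * field_weight  # 0.009076704 = weight(_text_:a)

      print(weight * coord * coord)               # ~0.002269176, the top-level score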
    
    Abstract
    Traditional searching and browsing functions for bibliographic databases no longer enable users to deal efficiently with the rapidly growing number of scientific publications. The main goal of our project BibRelEx is to develop a new method based on the visualization of content-based relations between documents, such as "cites", "succeeds", or "improves with respect to". BibRelEx will therefore use these relationships for effective exploration. In addition, BibRelEx will take advantage of the additional insights into the area that can result from the aggregation of expert knowledge, which complements the specialized knowledge represented in the documents themselves. We are preparing to test this approach using a bibliographic database in a specific area of computer science.
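    The relation types named in the abstract lend themselves to a labeled, directed graph. A small sketch of such a structure, with invented document identifiers:

      # Annotated content-based relations as a labeled, directed graph.
      # Relation labels come from the abstract; documents are invented.
      from collections import defaultdict

      outgoing = defaultdict(list)   # doc -> [(relation, target), ...]

      def relate(source, relation, target):
          outgoing[source].append((relation, target))

      relate("DocA", "cites", "DocB")
      relate("DocC", "improves with respect to", "DocB")
      relate("DocD", "succeeds", "DocC")

      # Exploration follows typed edges instead of keyword matches:
      print(outgoing["DocA"])   # [('cites', 'DocB')]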
    Type
    a
  2. Thiele, H.: The Dublin Core and Warwick framework : a review of the literature, March 1995 - September 1997 (1998) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 1254) [ClassicSimilarity], result of:
              0.009076704 = score(doc=1254,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 1254, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1254)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this essay is to identify and explore the dynamics of the literature associated with the Dublin Core Workshop Series. The essay opens by identifying the problems that the Dublin Core Workshop Series is addressing, the status of the Internet at the time of the first workshop, and the contributions each workshop has made to the ongoing discussion. The body of the essay describes the characteristics of the literature, highlights key documents, and identifies the major researchers. The essay closes with an evaluation of the literary trends and a consideration of future research directions. The essay concludes that a shift from a descriptive emphasis to a more empirical form of literature is about to take place. Future research questions are identified in the areas of satisfying searcher needs, the impact of surrogate descriptions on search engine performance, and the effectiveness of surrogate descriptions in authenticating Internet resources.
    Type
    a
  3. Paskin, N.: DOI: current status and outlook (1999) 0.00
    0.0022438213 = product of:
      0.0044876426 = sum of:
        0.0044876426 = product of:
          0.008975285 = sum of:
            0.008975285 = weight(_text_:a in 1245) [ClassicSimilarity], result of:
              0.008975285 = score(doc=1245,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16900843 = fieldWeight in 1245, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1245)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over the past few months the International DOI Foundation (IDF) has produced a number of discussion papers and other materials about the Digital Object Identifier (DOI℠) initiative. They are all available at the DOI web site, including a brief summary of the DOI origins and purpose. The aim of the present paper is to update those papers, reflecting recent progress, and to provide a summary of the current position and context of the DOI. Although much of the material presented here is the result of a consensus by the organisations forming the International DOI Foundation, some of the points discuss work in progress. The paper describes the origin of the DOI as a persistent identifier for managing copyrighted materials and its development under the non-profit International DOI Foundation into a system providing identifiers of intellectual property with a framework for open applications to be built using them. Persistent identification implementations consistent with URN specifications have up to now been hindered by lack of widespread availability of resolution mechanisms, content typology consensus, and sufficiently flexible infrastructure; DOI attempts to overcome these obstacles. Resolution of the DOI uses the Handle System®, which offers the necessary functionality for open applications. The aim of the International DOI Foundation is to promote widespread applications of the DOI, which it is doing by pioneering some early implementations and by providing an extensible framework to ensure interoperability of future DOI uses. Applications of the DOI will require an interoperable scheme of declared metadata with each DOI; the basis of the DOI metadata scheme is a minimal "kernel" of elements supplemented by additional application-specific elements, under an umbrella data model (derived from the INDECS analysis) that promotes convergence of different application metadata sets. The IDF intends to require declaration of only a minimal set of metadata, sufficient to enable unambiguous look-up of a DOI, but this must be capable of extension by others to create open applications.
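    The "minimal kernel plus application-specific elements" idea can be pictured as follows. This is an illustrative sketch only: the element names are assumptions, not the IDF's actual declared set, and the DOI shown is invented.

      # Kernel metadata: the minimal set enabling unambiguous look-up of a DOI.
      kernel = {
          "identifier": "10.1000/example123",    # hypothetical DOI
          "title": "Current status and outlook",
          "type": "text",                        # content typology
      }

      # Application-specific extension under the umbrella data model.
      extension = {
          "application": "rights-management",
          "rights_holder": "Example Publisher",
      }

      record = {**kernel, **extension}
      # Resolution itself is delegated to the Handle System; conceptually:
      #   resolve("10.1000/example123") -> current locations and services
      print(record["identifier"])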
    Type
    a
  4. Bearman, D.; Miller, E.; Rust, G.; Trant, J.; Weibel, S.: A common model to support interoperable metadata : progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI communities (1999) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 1249) [ClassicSimilarity], result of:
              0.00894975 = score(doc=1249,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 1249, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1249)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for the Bibliographic Record) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point identifying historical developments and common requirements of these perspectives on metadata and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
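    The knowledge-representation convention named here, RDF, expresses such metadata as subject-predicate-object triples. A small sketch using the rdflib library and the Dublin Core element namespace; the resource URI and values are invented:

      from rdflib import Graph, Literal, Namespace, URIRef

      DC = Namespace("http://purl.org/dc/elements/1.1/")
      g = Graph()
      doc = URIRef("http://example.org/report42")

      g.add((doc, DC.creator, Literal("Bearman, D.")))
      g.add((doc, DC.title, Literal("A common model to support interoperable metadata")))

      print(g.serialize(format="turtle"))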
    Type
    a
  5. Lynch, C.A.: The Z39.50 information retrieval standard : part I: a strategic view of its past, present and future (1997) 0.00
    0.002211719 = product of:
      0.004423438 = sum of:
        0.004423438 = product of:
          0.008846876 = sum of:
            0.008846876 = weight(_text_:a in 1262) [ClassicSimilarity], result of:
              0.008846876 = score(doc=1262,freq=38.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16659044 = fieldWeight in 1262, product of:
                  6.164414 = tf(freq=38.0), with freq of:
                    38.0 = termFreq=38.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Z39.50 standard for information retrieval is important from a number of perspectives. While still not widely known within the computer networking community, it is a mature standard that represents the culmination of two decades of thinking and debate about how information retrieval functions can be modeled, standardized, and implemented in a distributed systems environment. And -- importantly -- it has been tested through substantial deployment experience. Z39.50 is one of the few examples we have to date of a protocol that actually goes beyond codifying mechanism and moves into the area of standardizing shared semantic knowledge. The extent to which this should be a goal of the protocol has been an ongoing source of controversy and tension within the developer community, and differing views on this issue can be seen both in the standard itself and the way that it is used in practice. Given the growing emphasis on issues such as "semantic interoperability" as part of the research agenda for digital libraries (see Clifford A. Lynch and Hector Garcia-Molina. Interoperability, Scaling, and the Digital Libraries Research Agenda, Report on the May 18-19, 1995 IITA Libraries Workshop, <http://www-diglib.stanford.edu/diglib/pub/reports/iita-dlw/main.html>), the insights gained by the Z39.50 community into the complex interactions among various definitions of semantics and interoperability are particularly relevant. The development process for the Z39.50 standard is also of interest in its own right. Its history, dating back to the 1970s, spans a period that saw the eclipse of formal standards-making agencies by groups such as the Internet Engineering Task Force (IETF) and informal standards development consortia. Moreover, in order to achieve meaningful implementation, Z39.50 had to move beyond its origins in the OSI debacle of the 1980s. Z39.50 has also been, to some extent, a victim of its own success -- or at least promise. Recent versions of the standard are highly extensible, and the consensus process of standards development has made it hospitable to an ever-growing set of new communities and requirements. As this process of extension has proceeded, it has become ever less clear what the appropriate scope and boundaries of the protocol should be, and what expectations one should have of practical interoperability among implementations of the standard. Z39.50 thus offers an excellent case study of the problems involved in managing the evolution of a standard over time. It may well offer useful lessons for the future of other standards such as HTTP and HTML, which seem to be facing some of the same issues.
    This paper, which will appear in two parts, starting with this issue of D-Lib, looks at several strategic issues surrounding Z39.50. After a relatively brief overview of the function and history of the protocol, I will examine some of the competing visions of the protocol's role, with emphasis on issues of interoperability and the incorporation of semantics. The second installment of the paper will look at questions related to the management of the standard and the standards development process, with emphasis on the scope of the protocol and how that relates back again to interoperability questions. The paper concludes with a discussion of the adoption and deployment of the standard, its relationship to other standards, and some speculations on future directions for the protocol. This paper is not intended to be a tutorial on the details of how current or past versions of Z39.50 work. These technical details are covered not only in the standard itself (which can admittedly be rather difficult reading) but also in an array of tutorial and review papers (see <http://lcweb.loc.gov/z3950/agency> for bibliographies and pointers to on-line information on Z39.50). Instead, the paper's focus is on how and why Z39.50 developed the way it did, and the conceptual debates that have influenced its evolution and use. While a detailed technical knowledge of the operation of Z39.50 is certainly helpful, it should not be necessary in order to follow most of the material here. Some disclaimers are in order. I have been actively involved in the development of Z39.50 since the early 1980s and have been a participant -- and on occasion, even an instigator -- of some of the activities described here. This paper is an attempt to make a critical assessment of the current state of Z39.50 and a review of its development with the full benefit of hindsight. It recounts a number of debates that occurred within the developer community over the past years. In many of these, I advocated specific positions or approaches, sometimes successfully and sometimes unsuccessfully. What is presented here is one person's perspective -- mine -- which is sometimes at odds with the current consensus within the developer community; I've tried to represent opposing views fairly, and to differentiate my opinions from fact or consensus. However, others will undoubtedly disagree with many of the comments here.
    Type
    a
  6. Spink, A.; Wilson, T.; Ellis, D.; Ford, N.: Modeling users' successive searches in digital environments : a National Science Foundation/British Library funded study (1998) 0.00
    0.0021343792 = product of:
      0.0042687585 = sum of:
        0.0042687585 = product of:
          0.008537517 = sum of:
            0.008537517 = weight(_text_:a in 1255) [ClassicSimilarity], result of:
              0.008537517 = score(doc=1255,freq=26.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16076508 = fieldWeight in 1255, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As digital libraries become a major source of information for many people, we need to know more about how people seek and retrieve information in digital environments. Quite commonly, users with a problem-at-hand and associated question-in-mind repeatedly search a literature for answers, and seek information in stages over extended periods from a variety of digital information resources. The process of repeatedly searching over time in relation to a specific, but possibly evolving, information problem (including changes or shifts in a variety of variables), is called the successive search phenomenon. The study outlined in this paper is currently investigating this new and little explored line of inquiry for information retrieval, Web searching, and digital libraries. The purpose of the research project is to investigate the nature, manifestations, and behavior of successive searching by users in digital environments, and to derive criteria for use in the design of information retrieval interfaces and systems supporting successive searching behavior. This study includes two related projects. The first project is based in the School of Library and Information Sciences at the University of North Texas and is funded by a National Science Foundation POWRE Grant <http://www.nsf.gov/cgi-bin/show?award=9753277>. The second project is based at the Department of Information Studies at the University of Sheffield (UK) and is funded by a grant from the British Library <http://www.shef.ac.uk/~is/research/imrg/uncerty.html> Research and Innovation Center. The broad objectives of each project are to examine the nature and extent of successive search episodes in digital environments by real users over time. The specific aim of the current project is twofold: * To characterize progressive changes and shifts that occur in: user situational context; user information problem; uncertainty reduction; user cognitive styles; cognitive and affective states of the user, and consequently in their queries; and * To characterize related changes over time in the type and use of information resources and search strategies particularly related to given capabilities of IR systems, and IR search engines, and examine changes in users' relevance judgments and criteria, and characterize their differences. The study is an observational, longitudinal data collection in the U.S. and U.K. Three questionnaires are used to collect data: reference, client post-search and searcher post-search questionnaires. Each successive search episode with a search intermediary for textual materials on the DIALOG Information Service is audiotaped and search transaction logs are recorded. Quantitative analysis includes statistical analysis using Likert scale data from the questionnaires and log-linear analysis of sequential data. Qualitative methods include: content analysis, structuring taxonomies; and diagrams to describe shifts and transitions within and between each search episode. Outcomes of the study are the development of appropriate model(s) for IR interactions in successive search episodes and the derivation of a set of design criteria for interfaces and systems supporting successive searching.
    Type
    a
  7. Baker, T.: Languages for Dublin Core (1998) 0.00
    0.0021343792 = product of:
      0.0042687585 = sum of:
        0.0042687585 = product of:
          0.008537517 = sum of:
            0.008537517 = weight(_text_:a in 1257) [ClassicSimilarity], result of:
              0.008537517 = score(doc=1257,freq=26.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16076508 = fieldWeight in 1257, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1257)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
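    The role of the shared indexing tokens can be sketched in a few lines: fields entered through any language interface are pooled under the same machine-readable token. The German and French element names below are illustrative, not official translations:

      element_names = {
          "Creator": {"en": "Creator", "de": "Urheber", "fr": "Créateur"},
          "Subject": {"en": "Subject", "de": "Thema",   "fr": "Sujet"},
      }

      index = {}   # token -> values, pooled across languages

      def add_field(token, value):
          index.setdefault(token, []).append(value)

      add_field("Creator", "Baker, T.")           # entered via an English UI
      add_field("Creator", "Goethe, J.W. von")    # entered via a German UI
      print(index["Creator"])                     # indexed together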
    Type
    a
  8. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.00
    0.0021343792 = product of:
      0.0042687585 = sum of:
        0.0042687585 = product of:
          0.008537517 = sum of:
            0.008537517 = weight(_text_:a in 1164) [ClassicSimilarity], result of:
              0.008537517 = score(doc=1164,freq=26.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16076508 = fieldWeight in 1164, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The explosive growth of the Internet and other sources of networked information has made automatic mediation of access to networked information sources an increasingly important problem. Much of this information is expressed as electronic text, and it is becoming practical to automatically convert some printed documents and recorded speech to electronic text as well. Thus, automated systems capable of detecting useful documents are finding widespread application. With even a small number of languages it can be inconvenient to issue the same query repeatedly in every language, so users who are able to read more than one language will likely prefer a multilingual text retrieval system over a collection of monolingual systems. And since reading ability in a language does not always imply fluent writing ability in that language, such users will likely find cross-language text retrieval particularly useful for languages in which they are less confident of their ability to express their information needs effectively. The use of such systems can also be beneficial if the user is able to read only a single language. For example, when only a small portion of the document collection will ever be examined by the user, performing retrieval before translation can be significantly more economical than performing translation before retrieval. So when the application is sufficiently important to justify the time and effort required for translation, those costs can be minimized if an effective cross-language text retrieval system is available. Even when translation is not available, there are circumstances in which cross-language text retrieval could be useful to a monolingual user. For example, a researcher might find a paper published in an unfamiliar language useful if that paper contains references to works by the same author that are in the researcher's native language.
    Multilingual text retrieval can be defined as selection of useful documents from collections that may contain several languages (English, French, Chinese, etc.). This formulation allows for the possibility that individual documents might contain more than one language, a common occurrence in some applications. Both cross-language and within-language retrieval are included in this formulation, but it is the cross-language aspect of the problem which distinguishes multilingual text retrieval from its well studied monolingual counterpart. At the SIGIR 96 workshop on "Cross-Linguistic Information Retrieval" the participants discussed the proliferation of terminology being used to describe the field and settled on "Cross-Language" as the best single description of the salient aspect of the problem. "Multilingual" was felt to be too broad, since that term has also been used to describe systems able to perform within-language retrieval in more than one language but that lack any cross-language capability. "Cross-lingual" and "cross-linguistic" were felt to be equally good descriptions of the field, but "cross-language" was selected as the preferred term in the interest of standardization. Unfortunately, at about the same time the U.S. Defense Advanced Research Projects Agency (DARPA) introduced "translingual" as their preferred term, so we are still some distance from reaching consensus on this matter.
    I will not attempt to draw a sharp distinction between retrieval and filtering in this survey. Although my own work on adaptive cross-language text filtering has led me to make this distinction fairly carefully in other presentations (cf. (Oard 1997b)), such an approach does little to help understand the fundamental techniques which have been applied or the results that have been obtained in this case. Since it is still common to view filtering (detection of useful documents in dynamic document streams) as a kind of retrieval, I will simply adopt that perspective here.
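    One standard technique in this area, dictionary-based query translation, can be sketched in a few lines: translate the query terms, then retrieve monolingually. The lexicon and documents below are invented toys:

      lexicon = {"library": ["bibliothek", "bücherei"], "digital": ["digital"]}

      documents = {
          1: "eine digitale bibliothek für die wissenschaft",
          2: "suchmaschinen im vergleich",
      }

      def cross_language_search(english_query):
          # Each English word may have several candidate translations.
          terms = [t for word in english_query.lower().split()
                     for t in lexicon.get(word, [])]
          return [doc_id for doc_id, text in documents.items()
                  if any(term in text for term in terms)]

      print(cross_language_search("digital library"))   # [1]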
    Type
    a
  9. Fife, E.D.; Husch, L.: The Mathematics Archives : making mathematics easy to find on the Web (1999) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 1239) [ClassicSimilarity], result of:
              0.008285859 = score(doc=1239,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 1239, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Do a search on AltaVista for "algebra". What do you get? Nearly 700,000 hits, of which AltaVista will allow you to view only what it determines are the top 200. Major search engines such as AltaVista, Excite, HotBot, Lycos, and the like continue to provide a valuable service, but with the recent growth of the Internet, topic-specific sites that provide some organization to the topic are increasingly important. It is the goal of the Mathematics Archives to make it easier for the ordinary user to find useful mathematical information on the Web. The Mathematics Archives (http://archives.math.utk.edu) is a multipurpose site for mathematics on the Internet. The focus is on materials which can be used in mathematics education (primarily at the undergraduate level). Resources available range from shareware and public domain software to electronic proceedings of various conferences, to an extensive collection of annotated links to other mathematical sites. All materials on the Archives are categorized and cross-referenced for the convenience of the user. Several search mechanisms are provided. The Harvest search engine is implemented to provide a full text search of most of the pages on the Archives. The software we house and our list of annotated links to mathematical sites are both categorized by subject matter. Each of these collections has a specialized search engine to assist the user in locating desired material. Services at the Mathematics Archives are divided up into five broad topics: * Links organized by Mathematical Topics * Software * Teaching Materials * Other Math Archives Features * Other Links
    Type
    a
  10. Weibel, S.: The State of the Dublin Core Metadata Initiative April 1999 (1999) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 6066) [ClassicSimilarity], result of:
              0.008202582 = score(doc=6066,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 6066, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6066)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    One hundred and one experts in resource description convened in Washington, D.C., November 2 through November 4, 1998, for the sixth Dublin Core Metadata Workshop. The registrants represented 16 countries on 4 continents, and many disciplines. As with previous workshops, many new issues were opened, and vigorous debate was a hallmark of the event. Unlike previous workshops, the focus of DC-6 was not to resolve questions in plenary meetings, but rather to identify unresolved issues and assign them to formal working groups for resolution. The result of this process was an ambitious workplan for 1999. This report summarizes that workplan, highlights the progress that has been made on it, and identifies a few significant projects that exemplify this progress.
    Type
    a
  11. GERHARD : eine Spezialsuchmaschine für die Wissenschaft [a special search engine for science] (1998) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 381) [ClassicSimilarity], result of:
              0.008118451 = score(doc=381,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=381)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  12. Stanley, T.: Alta Vista vs. Lycos (1996) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 3939) [ClassicSimilarity], result of:
              0.008118451 = score(doc=3939,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 3939, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3939)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  13. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 1: frameworks for linking (1999) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 1244) [ClassicSimilarity], result of:
              0.008118451 = score(doc=1244,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 1244, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1244)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The creation of services linking related information entities is an area that is attracting an ever-increasing interest in the ongoing development of the World Wide Web in general, and of research-related information systems in particular. Although most writings on electronic scientific communication have touted other benefits, such as the increase in communication speed, the possibility to exchange multimedia content and the absence of limitations on the length of research papers, currently both practice and theory point at linking services as being a major domain for innovation enabled by digital communication of content. Publishers, subscription agents, researchers and libraries are all looking into ways to create added value by linking related information entities, as such presenting the information within a broader context estimated to be relevant to the users of the information. This is the first of two articles in D-Lib Magazine on this topic. This first part describes the current state of the art and contrasts various approaches to the problem. It identifies static and dynamic linking solutions as well as open and closed linking frameworks. It also includes an extensive bibliography. The second part, SFX, a Generic Linking Solution, describes a system that we have developed for linking in a hybrid working environment.
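    The static/dynamic distinction drawn here can be made concrete with a short sketch: a static (just-in-case) link is stored ahead of time, while a dynamic (just-in-time) link is computed when the user follows it and can take the local context into account. All names and URLs below are invented:

      # Static / just-in-case: links computed ahead of time and stored.
      static_links = {"article-17": ["http://example.org/fulltext/17"]}

      # Dynamic / just-in-time: links computed at request time, so they can
      # reflect the local collection of the institution serving the user.
      def dynamic_links(metadata, local_issns):
          links = [f"http://example.org/search?title={metadata['title']}"]
          if metadata["issn"] in local_issns:
              links.append(f"http://opac.example.org/issn/{metadata['issn']}")
          return links

      print(dynamic_links({"issn": "1234-5678", "title": "reference+linking"},
                          {"1234-5678"}))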
    Type
    a
  14. Atkins, H.: The ISI® Web of Science® - links and electronic journals : how links work today in the Web of Science, and the challenges posed by electronic journals (1999) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 1246) [ClassicSimilarity], result of:
              0.008118451 = score(doc=1246,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 1246, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1246)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since their inception in the early 1960s, the strength and unique aspect of the ISI citation indexes has been their ability to illustrate the conceptual relationships between scholarly documents. When authors create reference lists for their papers, they make explicit links between their own, current work and the prior work of others. The exact nature of these links may not be expressed in the references themselves, and the motivation behind them may vary (this has been the subject of much discussion over the years), but the links embodied in references do exist. Over the past 30+ years, technology has allowed ISI to make the presentation of citation searching increasingly accessible to users of our products. Citation searching and link tracking moved from being rather cumbersome in print, to being direct and efficient (albeit non-intuitive) online, to being somewhat more user-friendly in CD format. But it is the confluence of the hypertext link and the development of Web browsers that has enabled us to present to users a new form of citation product -- the Web of Science -- that is intuitive and makes citation indexing conceptually accessible. A cited reference search begins with a known, important (or at least relevant) document used as the search term. The search allows one to identify subsequent articles that have cited that document. This feature adds the dimension of prospective searching to the usual retrospective searching that all bibliographic indexes provide. Citation indexing is a prime example of a concept before its time -- important enough to be used in the meantime by those sufficiently motivated, but just waiting for the right technology to come along to expand its use. While it was possible to follow citation links in earlier citation index formats, this required a level of effort on the part of users that was often just too much to ask of the casual user. In the citation indexes as presented in the Web of Science, the relationship between citing and cited documents is evident to users, and a click of the mouse is all it takes to follow a citation link. Citation connections are established between the published papers being indexed from the 8,000+ journals ISI covers and the items their reference lists contain during the data capture process. It is the standardized capture of each of the references included with these documents that enables us to provide the citation searching feature in all the citation index formats, as well as both internal and external links in the Web of Science.
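    The core operation described here, the cited reference search, is an inversion of reference lists: from "X cites Y" build "Y is cited by X", so that a known document leads forward to later articles. A toy sketch with invented identifiers:

      references = {
          "paper1999": ["garfield1964", "smith1981"],
          "paper2001": ["garfield1964", "paper1999"],
      }

      # Invert the citing -> cited lists into a cited -> citing index.
      cited_by = {}
      for citing, refs in references.items():
          for cited in refs:
              cited_by.setdefault(cited, []).append(citing)

      # Prospective search: start from a known document, follow links forward.
      print(cited_by["garfield1964"])   # ['paper1999', 'paper2001']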
    Type
    a
  15. Daniel Jr., R.; Lagoze, C.: Extending the Warwick framework : from metadata containers to active digital objects (1997) 0.00
    0.0019633435 = product of:
      0.003926687 = sum of:
        0.003926687 = product of:
          0.007853374 = sum of:
            0.007853374 = weight(_text_:a in 1264) [ClassicSimilarity], result of:
              0.007853374 = score(doc=1264,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14788237 = fieldWeight in 1264, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1264)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Defining metadata as "data about data" provokes more questions than it answers. What are the forms of the data and metadata? Can we be more specific about the manner in which the metadata is "about" the data? Are data and metadata distinguished only in the context of their relationship? Is the nature of the relationship between the datasets declarative or procedural? Can the metadata itself be described by other data? Over the past several years, we have been engaged in a number of efforts examining the role, format, composition, and architecture of metadata for networked resources. During this time, we have noticed the tendency to be led astray by comfortable, but somewhat inappropriate, models in the non-digital information environment. Rather than pursuing familiar models, there is the need for a new model that fully exploits the unique combination of computation and connectivity that characterizes the digital library. In this paper, we describe an extension of the Warwick Framework that we call Distributed Active Relationships (DARs). DARs provide a powerful model for representing data and metadata in digital library objects. They explicitly express the relationships between networked resources, and even allow those relationships to be dynamically downloadable and executable. The DAR model is based on the following principles, which our examination of the "data about data" definition has led us to regard as axiomatic: * There is no essential distinction between data and metadata. We can only make such a distinction in terms of a particular "about" relationship. As a result, what is metadata in the context of one "about" relationship may be data in another. * There is no single "about" relationship. There are many different and important relationships between data resources. * Resources can be related without regard for their location. The connectivity in networked information architectures makes it possible to have data in one repository describe data in another repository. * The computational power of the networked information environment makes it possible to consider active or dynamic relationships between data sets. This adds considerable power to the "data about data" definition. First, data about another data set may not physically exist, but may be automatically derived. Second, the "about" relationship may be an executable object -- in a sense interpretable metadata. As will be shown, this provides useful mechanisms for handling complex metadata problems such as rights management of digital objects. The remainder of this paper describes the development and consequences of the DAR model. Section 2 reviews the Warwick Framework, which is the basis for the model described in this paper. Section 3 examines the concept of the Warwick Framework Catalog, which provides a mechanism for expressing the relationships between the packages in a Warwick Framework container. With that background established, section 4 generalizes the Warwick Framework by removing the restriction that it only contains "metadata". This allows us to consider digital library objects that are aggregations of (possibly distributed) data sets, with the relationships between the data sets expressed using a Warwick Framework Catalog. Section 5 further extends the model by describing Distributed Active Relationships (DARs). DARs are the explicit relationships that have the potential to be executable, as alluded to earlier. Finally, section 6 describes two possible implementations of these concepts.
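    The paper's central move, letting the "about" relationship itself be executable, can be sketched directly: a relationship is either a stored statement or a callable that derives its metadata on demand. Everything below is illustrative, not the authors' implementation:

      text = "Defining metadata as data about data provokes more questions than it answers."

      def word_count(resource):          # an executable "about" relationship
          return {"words": len(resource.split())}

      relationships = [
          ("describes", "catalog-record-42", "text-7"),   # declarative
          ("derives",   word_count,          "text-7"),   # active, executable
      ]

      for relation, subject, obj in relationships:
          # Derived metadata need not physically exist until it is asked for.
          value = subject(text) if callable(subject) else subject
          print(relation, value, obj)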
    Type
    a
  16. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.00
    0.001913537 = product of:
      0.003827074 = sum of:
        0.003827074 = product of:
          0.007654148 = sum of:
            0.007654148 = weight(_text_:a in 1256) [ClassicSimilarity], result of:
              0.007654148 = score(doc=1256,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14413087 = fieldWeight in 1256, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
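    The effect described in these two paragraphs, one query fanned out to independently maintained collections and merged into a single result list, can be sketched with plain function calls standing in for LDAP/CIP. Collection names and records are invented:

      collections = {
          "signpost":  [{"title": "Dublin Core overview", "subject": "metadata"}],
          "partner-a": [{"title": "LDAP in libraries",    "subject": "protocols"}],
      }

      def search_collection(records, query):
          return [r for r in records if query in r["title"].lower()]

      def distributed_search(query):
          # Fan the query out to every collection and merge the hits,
          # tagging each hit with its source.
          merged = []
          for name, records in collections.items():
              for hit in search_collection(records, query):
                  merged.append({**hit, "source": name})
          return merged

      print(distributed_search("dublin"))   # one virtual collection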
    Type
    a
  17. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 3: generalizing the SFX solution in the "SFX@Ghent & SFX@LANL" experiment (1999) 0.00
    0.0017899501 = product of:
      0.0035799001 = sum of:
        0.0035799001 = product of:
          0.0071598003 = sum of:
            0.0071598003 = weight(_text_:a in 1243) [ClassicSimilarity], result of:
              0.0071598003 = score(doc=1243,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13482209 = fieldWeight in 1243, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1243)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is the third part of our papers about reference linking in a hybrid library environment. The first part described the state-of-the-art of reference linking and contrasted various approaches to the problem. It identified static and dynamic linking solutions, open and closed linking frameworks as well as just-in-case and just-in-time linking. The second part introduced SFX, a dynamic, just-in-time linking solution we built for our own purposes. However, we suggested that the underlying concepts were sufficiently generic to be applied in a wide range of digital libraries. In this third part we show how this has been demonstrated conclusively in the "SFX@Ghent & SFX@LANL" experiment. In this experiment, local as well as remote distributed information resources of the digital library collections of the Research Library of the Los Alamos National Laboratory and the University of Ghent Library have been used as starting points for SFX-links into other parts of the collections. The SFX-framework has further been generalized in order to achieve a technology that can easily be transferred from one digital library environment to another and that minimizes the overhead in making the distributed information services that make up those libraries interoperable with SFX. This third part starts with a presentation of the SFX problem statement in light of the recent discussions on reference linking. Next, it introduces the notion of global and local relevance of extended services as well as an architectural categorization of open linking frameworks, also referred to as frameworks that are supportive of selective resolution. Then, an in-depth description of the generalized SFX solution is given.
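    The notion of selective resolution, services offered only where they are relevant, can be pictured as a registry of extended services, each with its own applicability test. Service names and rules below are illustrative, not the SFX implementation:

      services = [
          ("local full text", lambda ctx: ctx["issn"] in ctx["licensed_issns"]),
          ("cited-by lookup", lambda ctx: True),   # globally relevant
      ]

      def resolve(ctx):
          # Offer only the extended services whose test passes for this context.
          return [name for name, applies in services if applies(ctx)]

      context = {"issn": "1234-5678", "licensed_issns": set()}
      print(resolve(context))   # ['cited-by lookup']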
    Type
    a
  18. Buckland, M.; Chen, A.; Chen, H.M.; Kim, Y.; Lam, B.; Larson, R.; Norgard, B.; Purat, J.; Gey, F.: Mapping entry vocabulary to unfamiliar metadata vocabularies (1999) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 1238) [ClassicSimilarity], result of:
              0.007030784 = score(doc=1238,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 1238, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1238)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The emerging network environment brings access to an increasing population of heterogeneous repositories. Inevitably, these have quite diverse metadata vocabularies (categorization codes, classification numbers, index and thesaurus terms). So, necessarily, the number of metadata vocabularies that are accessible but unfamiliar to any individual searcher is increasing steeply. When an unfamiliar metadata vocabulary is encountered, how is a searcher to know which codes or terms will lead to what is wanted? This paper reports work at the University of California, Berkeley, on the design and development of English language indexes to metadata vocabularies. Further details and the current status of the work can be found at the project website http://www.sims.berkeley.edu/research/metadata/
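    An entry-vocabulary index of the kind described can be sketched as a mapping from ordinary search words to the codes of an unfamiliar vocabulary. The mappings below are invented for illustration:

      entry_index = {
          "cars":        [("tariff", "8703")],    # hypothetical classification code
          "automobiles": [("tariff", "8703")],
          "algebra":     [("subject", "08-XX")],  # hypothetical subject code
      }

      def suggest_codes(words):
          # Map a searcher's own words onto candidate vocabulary codes.
          codes = set()
          for word in words:
              codes.update(entry_index.get(word.lower(), []))
          return sorted(codes)

      print(suggest_codes(["Automobiles"]))   # [('tariff', '8703')]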
    Type
    a
  19. Overton, R.: Search engines get faster and faster, but not always better (1996) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 5669) [ClassicSimilarity], result of:
              0.006765375 = score(doc=5669,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5669)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  20. Oard, D.W.: Serving users in many languages : cross-language information retrieval for digital libraries (1997) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1261) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1261,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1261, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We are rapidly constructing an extensive network infrastructure for moving information across national boundaries, but much remains to be done before linguistic barriers can be surmounted as effectively as geographic ones. Users seeking information from a digital library could benefit from the ability to query large collections once using a single language, even when more than one language is present in the collection. If the information they locate is not available in a language that they can read, some form of translation will be needed. At present, multilingual thesauri such as EUROVOC help to address this challenge by facilitating controlled vocabulary search using terms from several languages, and services such as INSPEC produce English abstracts for documents in other languages. On the other hand, support for free text searching across languages is not yet widely deployed, and fully automatic machine translation is presently neither sufficiently fast nor sufficiently accurate to adequately support interactive cross-language information seeking. An active and rapidly growing research community has coalesced around these and other related issues, applying techniques drawn from several fields - notably information retrieval and natural language processing - to provide access to large multilingual collections.
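    The controlled-vocabulary route described here, a multilingual thesaurus in the style of EUROVOC, can be sketched as one concept carrying terms in several languages, with documents indexed by concept rather than by term. All data below is invented:

      concept_terms = {
          "c_env": {"en": "environment", "de": "umwelt", "fr": "environnement"},
      }
      docs_by_concept = {"c_env": ["doc-1", "doc-9"]}

      def thesaurus_search(term):
          # A query in any language resolves to a concept, then to documents.
          term = term.lower()
          for concept, names in concept_terms.items():
              if term in names.values():
                  return docs_by_concept.get(concept, [])
          return []

      print(thesaurus_search("Umwelt"))   # ['doc-1', 'doc-9']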
    Type
    a

Languages

  • e 46
  • d 7