Search (389 results, page 20 of 20)

  • Active filter: type_ss:"el"
  1. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.00
    0.0040839324 = product of:
      0.008167865 = sum of:
        0.008167865 = product of:
          0.01633573 = sum of:
            0.01633573 = weight(_text_:systems in 1202) [ClassicSimilarity], result of:
              0.01633573 = score(doc=1202,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1018623 = fieldWeight in 1202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
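
    The indented breakdown above is Lucene's "explain" output for the ClassicSimilarity scoring model; the same tree repeats for every result below, differing only in the document id, the fieldNorm, and the idf/queryNorm constants shared by the whole query. As a check on how the numbers combine, here is a minimal Python sketch that reproduces this entry's score from the values shown (queryNorm is copied rather than derived, since it depends on the full query):

```python
import math

# Values taken directly from the explain tree for doc 1202.
freq, doc_freq, max_docs = 2.0, 5561, 44218
query_norm = 0.052184064   # copied; normalizes across the whole query
field_norm = 0.0234375     # length normalization for this field, quantized

tf = math.sqrt(freq)                            # 1.4142135 = tf(freq=2.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))   # 3.0731742 = idf(docFreq, maxDocs)
query_weight = idf * query_norm                 # 0.16037072 = queryWeight
field_weight = tf * idf * field_norm            # 0.1018623  = fieldWeight
raw = query_weight * field_weight               # 0.01633573 = weight(_text_:systems)

# Two coord(1/2) factors: at each level, only 1 of 2 query clauses matched.
score = raw * 0.5 * 0.5
print(f"{score:.7f}")  # 0.0040839, matching the listed score up to Lucene's float32 rounding
```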
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, the Library of Congress Classification (LCC) and the Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together... You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience... those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
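
    The "keyword vector clusters" the abstract mentions can be pictured with a minimal sketch: represent each catalog surrogate as a TF-IDF keyword vector and cluster the vectors. This is only an illustration of the general technique, using scikit-learn and invented toy records, not the Scholastica Project's actual pipeline:

```python
# Minimal sketch of keyword vector clustering; hypothetical catalog
# surrogates (title + subject terms), not Scholastica Project data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "library classification dewey decimal shelving",
    "knowledge organization classification subject headings",
    "digital library retrieval interface visualization",
    "information visualization interface browsing hierarchy",
]

vectors = TfidfVectorizer().fit_transform(docs)  # documents -> keyword vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 0 1 1]: classification records vs. visualization records
```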
  2. Brand, A.: CrossRef turns one (2001) 0.00
    0.0040839324 = product of:
      0.008167865 = sum of:
        0.008167865 = product of:
          0.01633573 = sum of:
            0.01633573 = weight(_text_:systems in 1222) [ClassicSimilarity], result of:
              0.01633573 = score(doc=1222,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1018623 = fieldWeight in 1222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1222)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Citation linking is thus also a huge benefit to journal publishers, because, as with electronic bookselling, it drives readers to their content in yet another way. In step with what was largely a subscription-based economy for journal sales, an "article economy" appears to be emerging. Journal publishers sell an increasing amount of their content on an article basis, whether through document delivery services, aggregators, or their own pay-per-view systems. At the same time, most research-oriented access to digitized material is still mediated by libraries. Resource discovery services must be able to authenticate subscribed or licensed users somewhere in the process, and ensure that a given user accesses by default the version of an article that their library may have already paid for. The well-known "appropriate copy" issue is addressed below. Another benefit to publishers from including outgoing citation links is simply the value they can add to their own journals. Publishers carry out the bulk of the technological prototyping and development that has produced electronic journals and the enhanced functionality readers have come to expect. There is clearly competition among them to provide readers with the latest features. That a number of publishers would agree to collaborate in the establishment of an infrastructure for reference linking was thus by no means predictable. CrossRef was incorporated in January of 2000 as a collaborative venture among 12 of the world's top scientific and scholarly publishers, both commercial and not-for-profit, to enable cross-publisher reference linking throughout the digital journal literature. The founding members were Academic Press, a Harcourt Company; the American Association for the Advancement of Science (the publisher of Science); American Institute of Physics (AIP); Association for Computing Machinery (ACM); Blackwell Science; Elsevier Science; The Institute of Electrical and Electronics Engineers, Inc. (IEEE); Kluwer Academic Publishers (a Wolters Kluwer Company); Nature; Oxford University Press; Springer-Verlag; and John Wiley & Sons, Inc. Start-up funds for CrossRef were provided as loans from eight of the original publishers.
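
    The reference-linking infrastructure described here rests on DOI resolution. As a present-day illustration (CrossRef's public REST API postdates this 2001 article), one can fetch the bibliographic metadata CrossRef holds for a DOI; the DOI below is only an example:

```python
# Hedged sketch: look up a DOI via CrossRef's public REST API (a service
# that postdates the article) and print core metadata.
import requests

doi = "10.1045/may2001-brand"  # example DOI for illustration
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
resp.raise_for_status()
msg = resp.json()["message"]
print(msg.get("title"), msg.get("container-title"))

# Resolution itself is simpler still: an HTTP GET on https://doi.org/<doi>
# redirects to the publisher's copy, the link CrossRef maintains for
# cross-publisher reference linking.
```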
  3. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.00
    0.0040839324 = product of:
      0.008167865 = sum of:
        0.008167865 = product of:
          0.01633573 = sum of:
            0.01633573 = weight(_text_:systems in 1253) [ClassicSimilarity], result of:
              0.01633573 = score(doc=1253,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1018623 = fieldWeight in 1253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1253)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information retrieval over the Internet increasingly requires the filtering of thousands of heterogeneous information sources. Important sources of information include not only traditional databases with structured data and queries, but also increasing numbers of non-traditional, semi- or unstructured collections such as Web sites, FTP archives, etc. As the number and variability of sources increase, new ways of automatically summarizing, discovering, and selecting collections relevant to a user's query are needed. One such method involves the use of classification schemes, such as the Library of Congress Classification (LCC), within which a collection may be represented based on its content, irrespective of the structure of the actual data or documents. For such a system to be useful in a large-scale distributed environment, it must be easy to use for both collection managers and users. As a result, it must be possible to classify documents automatically within a classification scheme. Furthermore, there must be a straightforward and intuitive interface with which the user may use the scheme to assist in information retrieval (IR). Our work with the Alexandria Digital Library (ADL) Project focuses on geo-referenced information, whether text, maps, aerial photographs, or satellite images. As a result, we have emphasized techniques which work with both text and non-text, such as combined textual and graphical queries, multi-dimensional indexing, and IR methods which are not solely dependent on words or phrases. Part of this work involves locating relevant online sources of information. In particular, we have designed and are currently testing aspects of an architecture, Pharos, which we believe will scale up to 1,000,000 heterogeneous sources. Pharos accommodates heterogeneity in content and format, both among multiple sources as well as within a single source. That is, we consider sources to include Web sites, FTP archives, newsgroups, and full digital libraries; all of these systems can include a wide variety of content and multimedia data formats. Pharos is based on the use of hierarchical classification schemes. These include not only well-known 'subject' (or 'concept') based schemes such as the Dewey Decimal System and the LCC, but also, for example, geographic classifications, which might be constructed as layers of smaller and smaller hierarchical longitude/latitude boxes (see the sketch below). Pharos is designed to work with sophisticated queries which utilize subjects, geographical locations, temporal specifications, and other types of information domains. The Pharos architecture requires that hierarchically structured collection metadata be extracted so that it can be partitioned in such a way as to greatly enhance scalability. Automated classification is important to Pharos because it allows information sources to automatically extract the requisite collection metadata that must be distributed.
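
    The geographic classifications described above are "layers of smaller and smaller hierarchical longitude/latitude boxes". A minimal sketch of that idea (an illustration only, not Pharos code, with all names invented): split the current box into a 2x2 grid at each level, so any point maps to a path of nested cells that can serve as a hierarchical class:

```python
# Sketch of hierarchical longitude/latitude boxes: each level halves the
# current box in both dimensions, yielding a quadtree-style class path.

def geo_class_path(lon: float, lat: float, levels: int = 4) -> list[str]:
    w, e, s, n = -180.0, 180.0, -90.0, 90.0
    path = []
    for _ in range(levels):
        mid_lon, mid_lat = (w + e) / 2, (s + n) / 2
        col = int(lon >= mid_lon)   # 0 = west half, 1 = east half
        row = int(lat >= mid_lat)   # 0 = south half, 1 = north half
        path.append(f"{row}{col}")
        w, e = (mid_lon, e) if col else (w, mid_lon)
        s, n = (mid_lat, n) if row else (s, mid_lat)
    return path

# Santa Barbara (home of the Alexandria Digital Library), roughly 119.7 W, 34.4 N:
print("/".join(geo_class_path(-119.7, 34.4)))  # 10/00/11/10
```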
  4. Lynch, C.A.: The Z39.50 information retrieval standard : part I: a strategic view of its past, present and future (1997) 0.00
    0.0040839324 = product of:
      0.008167865 = sum of:
        0.008167865 = product of:
          0.01633573 = sum of:
            0.01633573 = weight(_text_:systems in 1262) [ClassicSimilarity], result of:
              0.01633573 = score(doc=1262,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1018623 = fieldWeight in 1262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Z39.50 standard for information retrieval is important from a number of perspectives. While still not widely known within the computer networking community, it is a mature standard that represents the culmination of two decades of thinking and debate about how information retrieval functions can be modeled, standardized, and implemented in a distributed systems environment. And, importantly, it has been tested through substantial deployment experience. Z39.50 is one of the few examples we have to date of a protocol that actually goes beyond codifying mechanism and moves into the area of standardizing shared semantic knowledge. The extent to which this should be a goal of the protocol has been an ongoing source of controversy and tension within the developer community, and differing views on this issue can be seen both in the standard itself and in the way that it is used in practice. Given the growing emphasis on issues such as "semantic interoperability" as part of the research agenda for digital libraries (see Clifford A. Lynch and Hector Garcia-Molina, Interoperability, Scaling, and the Digital Libraries Research Agenda, Report on the May 18-19, 1995 IITA Libraries Workshop, <http://www-diglib.stanford.edu/diglib/pub/reports/iita-dlw/main.html>), the insights gained by the Z39.50 community into the complex interactions among various definitions of semantics and interoperability are particularly relevant. The development process for the Z39.50 standard is also of interest in its own right. Its history, dating back to the 1970s, spans a period that saw the eclipse of formal standards-making agencies by groups such as the Internet Engineering Task Force (IETF) and informal standards development consortia. Moreover, in order to achieve meaningful implementation, Z39.50 had to move beyond its origins in the OSI debacle of the 1980s. Z39.50 has also been, to some extent, a victim of its own success, or at least of its promise. Recent versions of the standard are highly extensible, and the consensus process of standards development has made it hospitable to an ever-growing set of new communities and requirements. As this process of extension has proceeded, it has become ever less clear what the appropriate scope and boundaries of the protocol should be, and what expectations one should have of practical interoperability among implementations of the standard. Z39.50 thus offers an excellent case study of the problems involved in managing the evolution of a standard over time. It may well offer useful lessons for the future of other standards such as HTTP and HTML, which seem to be facing some of the same issues.
  5. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.00
    0.0040839324 = product of:
      0.008167865 = sum of:
        0.008167865 = product of:
          0.01633573 = sum of:
            0.01633573 = weight(_text_:systems in 468) [ClassicSimilarity], result of:
              0.01633573 = score(doc=468,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1018623 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of digitally created content has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation, e.g. digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats, and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving communities have made valuable contributions to the management of huge amounts of knowledge and data. However, the two approach this topic from different views, which should be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving communities, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, supported by appropriate W3C recommendations and a large user community. At the forefront of making the semantic web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives. Semantic representations of contextual knowledge about cultural heritage objects will enhance organization and access of data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human Computer Interaction) will also be included in the research.
  6. Paskin, N.: Identifier interoperability : a report on two recent ISO activities (2006) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 1179) [ClassicSimilarity], result of:
              0.013613109 = score(doc=1179,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    There has been continuing discussion over a number of years within ISO TC46 SC9 of the need for interoperability between the various standard identifiers for which this committee is responsible. However, the nature of what that interoperability might mean - and how it might be achieved - has not been well explored. Considerable amounts of work have been done on standardising the identification schemes within each media sector, by creating standard identifiers that can be used within that sector. Equally, much work has been done on creating standard or reference metadata sets that can be used to associate key metadata descriptors with content. Much less work has been done on the impact of cross-sector working. Relatively little is understood about the effect of using one industry's identifiers in another industry, or of attempting to import metadata from one identification scheme into a system based on another. In the long term it is clear that interoperability of all these media identifiers and metadata schemes will be required. What is not clear is what initial steps are likely to deliver this soonest. Under the auspices of ISO TC46, an ad hoc group of representatives of TC46 SC9 Registration Authorities and invited experts met in London in late 2005, in a facilitated workshop funded by the registration agencies (RAs) responsible for ISAN, ISWC, ISRC and DOI, to develop definitions and use cases, with the intention of providing a framework within which a more structured exploration of the issues might be undertaken. A report of the workshop prepared by Mark Bide of Rightscom Ltd. was used as the input for a wider discussion at the ISO TC46 meeting held in Thailand in February 2006, at which ISO TC46/SC9 agreed that Registration Authorities for ISRC, ISWC, ISAN, ISBN, ISSN and ISMN and the proposed RAs for ISTC and DOI should continue working on common issues relating to interoperability of identifier systems developed within TC46/SC9; some of the use cases have been selected for further in-depth investigation, in parallel with discussions on potential solutions.
  7. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 1875) [ClassicSimilarity], result of:
              0.013613109 = score(doc=1875,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 1875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1875)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Computer vision specialists said that despite the improvements, these software systems had made only limited progress toward the goal of digitally duplicating human vision and, even more elusive, understanding. "I don't know that I would say this is 'understanding' in the sense we want," said John R. Smith, a senior manager at I.B.M.'s T.J. Watson Research Center in Yorktown Heights, N.Y. "I think even the ability to generate language here is very limited." But the Google and Stanford teams said that they expect to see significant increases in accuracy as they improve their software and train these programs with larger sets of annotated images. A research group led by Tamara L. Berg, a computer scientist at the University of North Carolina at Chapel Hill, is training a neural network with one million images annotated by humans. "You're trying to tell the story behind the image," she said. "A natural scene will be very complex, and you want to pick out the most important objects in the image."
  8. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 4232) [ClassicSimilarity], result of:
              0.013613109 = score(doc=4232,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 4232, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines to search the web, like Google, focus on searching web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, it occurs that users would rather browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The consequently applied semantics allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies, concept lists composed by experts, published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
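
    The link-following exploration the thesis argues for can be pictured with a small sketch: load a few triples that use a controlled vocabulary (SKOS) and follow its skos:related links. This illustrates the general idea with rdflib and invented example concepts, not the thesis's actual exploration system:

```python
# Minimal sketch of exploring semantic relationships in linked data with
# rdflib; the ex: concepts are hypothetical.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/> .
ex:Cataloging     skos:prefLabel "Cataloging" ;     skos:related ex:Classification .
ex:Classification skos:prefLabel "Classification" ; skos:related ex:Shelving .
ex:Shelving       skos:prefLabel "Shelving" .
""", format="turtle")

q = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?a ?b WHERE { ?x skos:related ?y .
                     ?x skos:prefLabel ?a . ?y skos:prefLabel ?b . }
"""
for a, b in g.query(q):
    print(f"{a} -> {b}")   # e.g. Cataloging -> Classification
```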
  9. Dodge, M.: A map of Yahoo! (2000) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 1555) [ClassicSimilarity], result of:
              0.010890487 = score(doc=1555,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 1555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1555)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Research Prototypes
    • Visual SiteMap: developed by Xia Lin, based at the College of Library and Information Science, Drexel University.
    • CVG: cyberspace geography visualization, developed by Luc Girardin at The Graduate Institute of International Studies, Switzerland.
    • WEBSOM: maps the thousands of articles posted on Usenet newsgroups; being developed by researchers at the Neural Networks Research Centre, Helsinki University of Technology, Finland.
    • TreeMaps: developed by Brian Johnson, Ben Shneiderman and colleagues in the Human-Computer Interaction Lab at the University of Maryland.
    Commercial Information Maps
    • NewsMaps: provides interactive information landscapes summarizing daily news stories; developed by Cartia, Inc.
    • Web Squirrel: creates maps known as information farms; developed by Eastgate Systems, Inc.
    • Umap: produces interactive maps of Web searches.
    • Map of the Market: an interactive map of the market performance of the stocks of major US corporations, developed by SmartMoney.com.

Languages

  • e 274
  • d 102
  • a 3
  • el 2
  • i 2
  • nl 1

Types

  • a 180
  • i 11
  • x 10
  • s 8
  • m 7
  • r 6
  • b 2
  • n 1
  • p 1
