Search (51 results, page 3 of 3)

  • type_ss:"r"
  • year_i:[2000 TO 2010}
  1. Euzenat, J.; Bach, T. Le; Barrasa, J.; Bouquet, P.; Bo, J. De; Dieng, R.; Ehrig, M.; Hauswirth, M.; Jarrar, M.; Lara, R.; Maynard, D.; Napoli, A.; Stamou, G.; Stuckenschmidt, H.; Shvaiko, P.; Tessaris, S.; Acker, S. Van; Zaihrayeu, I.: State of the art on ontology alignment (2004) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 172) [ClassicSimilarity], result of:
          0.010096614 = score(doc=172,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 172, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=172)
      0.16666667 = coord(1/6)
    
    Abstract
    In this document we provide an overall view of the state of the art in ontology alignment. It is organised as a description of the need for ontology alignment, a presentation of the techniques currently in use for ontology alignment, and a presentation of existing systems. The state of the art is not restricted to any discipline and considers, for instance, the work on schema matching within the database area as a form of ontology alignment. Some heterogeneity problems on the semantic web can be solved by aligning heterogeneous ontologies. This is illustrated through a number of use cases of ontology alignment. Aligning ontologies consists of providing the corresponding entities in these ontologies. This process is precisely defined in deliverable D2.2.1. The current deliverable presents the many techniques currently used for implementing this process. These techniques are classified along the many features that can be found in ontologies (labels, structures, instances, semantics). They resort to many different disciplines such as statistics, machine learning or data analysis. The alignment itself is obtained by combining these techniques towards a particular goal (obtaining an alignment with particular features, optimising some criterion). Several combination techniques are also presented. Finally, these techniques have been experimented with in various systems for ontology alignment or schema matching. Several such systems are presented briefly in the last section and characterised by the techniques they rely on. The conclusion is that many techniques are available for achieving ontology alignment and many systems have been developed based on these techniques. However, few comparisons and little integration are actually provided by these implementations. This deliverable serves as a basis for considering further action along these two lines. It provides a first inventory of what should be evaluated and suggests what evaluation criteria can be used.
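Editor's note: each entry on this page carries a Lucene "explain" tree for the ClassicSimilarity (tf-idf) scoring model. As a reading aid, the short Python sketch below (not part of the original page; the function name is illustrative) recombines the factors shown in entry 1's tree and reproduces its listed score.

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Recombine the factors of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                    # tf(freq=16.0) = 4.0
    query_weight = idf * query_norm         # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
    return coord * query_weight * field_weight

# Factors copied from entry 1's explain tree (doc=172, term "in").
# idf itself is 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 30842).
score = classic_similarity_score(
    freq=16.0,
    idf=1.3602545,
    query_norm=0.043654136,
    field_norm=0.03125,
    coord=1.0 / 6.0,   # coord(1/6): one of six query clauses matched
)
print(round(score, 9))  # 0.001682769, the score listed above
```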
  2. Babeu, A.: Building a "FRBR-inspired" catalog : the Perseus digital library experience (2008) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 2429) [ClassicSimilarity], result of:
          0.010096614 = score(doc=2429,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 2429, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2429)
      0.16666667 = coord(1/6)
    
    Abstract
    If one follows any of the major cataloging or library blogs these days, it is obvious that the topic of FRBR (Functional Requirements for Bibliographic Records) has increasingly become one of major significance for the library community. What began as a proposed conceptual entity-relationship model for improving the structure of bibliographic records has become a hotly debated topic with many tangled threads that have implications not just for cataloging but for many aspects of libraries and librarianship. In the fall of 2005, the Perseus Project experimented with creating a FRBRized catalog for its current online classics collection, a collection that consists of several hundred classical texts in Greek and Latin as well as reference works and scholarly commentaries regarding these works. In the last two years, with funding from the Mellon Foundation, Perseus has amassed and digitized a growing collection of classical texts (some as image books on our own servers that will eventually be made available through Fedora, and some available through the Open Content Alliance (OCA)) and created FRBRized cataloging data for these texts. This work was done largely as an experiment to see the potential of the FRBR model for creating a specialized catalog for classics.
    Our catalog should perhaps not be called a FRBR catalog but instead a "FRBR-inspired catalog." As such, our main goal has been "practical findability": we are seeking to support the four identified user tasks of the FRBR model, to "Search, Identify, Select, and Obtain," rather than to create a FRBR catalog per se. By encoding as much information as possible in the MODS and MADS records we have created, we believe that useful searching will be supported; that by using unique identifiers for works and authors, users will be able to identify that the entity they have located is the desired one; that by encoding expression-level information (such as the language of the work, the translator, etc.), users will be able to select which expression of a work they are interested in; and that by supplying links to different online manifestations, users will be able to obtain access to a digital copy of a work. This white paper will discuss previous and current efforts by the Perseus Project in creating a FRBRized catalog, including the cataloging workflow and lessons learned during the process, and will also seek to place this work in the larger context of research regarding FRBR, cataloging, Library 2.0 and the Semantic Web, and the growing importance of the FRBR model in the face of growing million-book digital libraries.
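Editor's note: the encoding strategy described above maps record fields to the four FRBR user tasks. The sketch below is a hypothetical, much-simplified record in plain Python, not actual MODS/MADS markup; all field names and values are illustrative assumptions, shown only to make the task mapping concrete.

```python
# Hypothetical, simplified stand-in for a MODS/MADS record; field names
# and values are illustrative, not actual MODS elements.
record = {
    "work": {
        "title": "Odyssey",                  # Search: find the work by title
        "author_id": "urn:example:homer",    # Identify: unique author/work identifiers
    },
    "expressions": [
        {
            "language": "eng",               # Select: pick an expression by language...
            "translator": "Samuel Butler",   # ...and by translator
            "manifestations": [
                {"url": "https://example.org/odyssey-butler"}  # Obtain: digital copy
            ],
        }
    ],
}
```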
  3. Lubetzky, S.: Principles of cataloging (2001) 0.00
    0.0016629322 = product of:
      0.009977593 = sum of:
        0.009977593 = weight(_text_:in in 2627) [ClassicSimilarity], result of:
          0.009977593 = score(doc=2627,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16802745 = fieldWeight in 2627, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2627)
      0.16666667 = coord(1/6)
    
    Abstract
    This report constitutes Phase I of a two-part study; a Phase II report will discuss subject cataloging. Phase I is concerned with the materials of a library as individual records (or documents) and as representations of certain works by certain authors--that is, with descriptive, or bibliographic, cataloging. Discussed in the report are (1) the history, role, function, and objectives of the author-and-title catalog; (2) problems and principles of descriptive cataloging, including the use and function of "main entry," the principle of authorship, and the process and problems of cataloging print and nonprint materials; (3) organization of the catalog; and (4) potentialities of automation. The considerations inherent in bibliographic cataloging, such as the distinction between the "book" and the "work," are said to be so elemental that they are essential not only to the effective control of a library's materials but also to that of the information contained in the materials. Because of the special concern with information, the author includes a discussion of the "Bibliographic Dimensions of Information Control," prepared in collaboration with Robert M. Hayes, which also appears in "American Documentation," Vol. 20, July 1969, p. 247-252.
  4. Calhoun, K.: The changing nature of the catalog and its integration with other discovery tools : Prepared for the Library of Congress (2006) 0.00
    0.0015740865 = product of:
      0.009444519 = sum of:
        0.009444519 = weight(_text_:in in 5013) [ClassicSimilarity], result of:
          0.009444519 = score(doc=5013,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15905021 = fieldWeight in 5013, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5013)
      0.16666667 = coord(1/6)
    
    Abstract
    The destabilizing influences of the Web, widespread ownership of personal computers, and rising computer literacy have created an era of discontinuous change in research libraries, a time when the cumulated assets of the past do not guarantee future success. The library catalog is such an asset. Today, a large and growing number of students and scholars routinely bypass library catalogs in favor of other discovery tools, and the catalog represents a shrinking proportion of the universe of scholarly information. The catalog is in decline, its processes and structures are unsustainable, and change needs to be swift. At the same time, books and serials are not dead, and they are not yet digital. Notwithstanding widespread expansion of digitization projects, ubiquitous e-journals, and a market that seems poised to move to e-books, the role of catalog records in discovery and retrieval of the world's library collections seems likely to continue for at least a couple of decades and probably longer. This report, commissioned by the Library of Congress (LC), offers an analysis of the current situation, options for revitalizing research library catalogs, a feasibility assessment, a vision for change, and a blueprint for action. Library decision makers are the primary audience for this report, whose aim is to elicit support, dialogue, collaboration, and movement toward solutions. Readers from the business community, particularly those that directly serve libraries, may find the report helpful for defining research and development efforts. The same is true for readers from membership organizations such as OCLC Online Computer Library Center, the Research Libraries Group, the Association for Research Libraries, the Council on Library and Information Resources, the Coalition for Networked Information, and the Digital Library Federation. Library managers and practitioners from all functional groups are likely to take an interest in the interview findings and in specific actions laid out in the blueprint.
  5. De Rosa, C.; Cantrell, J.; Cellentani, D.; Hawk, J.; Jenkins, L.; Wilson, A.: Perceptions of libraries and information resources : A Report to the OCLC Membership (2005) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 5018) [ClassicSimilarity], result of:
          0.009274333 = score(doc=5018,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 5018, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5018)
      0.16666667 = coord(1/6)
    
    Abstract
    Summarizes findings of an international study on information-seeking habits and preferences: With extensive input from hundreds of librarians and OCLC staff, the OCLC Market Research team developed a project and commissioned Harris Interactive Inc. to survey a representative sample of information consumers. In June of 2005, we collected over 3,300 responses from information consumers in Australia, Canada, India, Singapore, the United Kingdom and the United States. The Perceptions report provides the findings and responses from the online survey in an effort to learn more about:
    * Library use
    * Awareness and use of library electronic resources
    * Free vs. for-fee information
    * The "Library" brand
    The findings indicate that information consumers view libraries as places to borrow print books, but they are unaware of the rich electronic content they can access through libraries. Even though information consumers make limited use of these resources, they continue to trust libraries as reliable sources of information.
  6. Cataloging cultural objects : a guide to describing cultural works and their images (2003) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 2398) [ClassicSimilarity], result of:
          0.008924231 = score(doc=2398,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 2398, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2398)
      0.16666667 = coord(1/6)
    
    Abstract
    It may be jumping the gun a bit to review this publication before it is actually published, but we are nothing if not current here at Current Cites, so we will do it anyway (so sue us!). This publication-in-process is a joint effort of the Visual Resources Association and the Digital Library Federation. It aims to "provide guidelines for selecting, ordering, and formatting data used to populate catalog records" relating to cultural works. Although this work is far from finished (Chapters 1, 2, 7, and 9 are available, as well as front and back matter), the authors are making it available so practitioners can use it and respond with information about how it can be improved to better aid their work. A stated goal is to publish it in print at some point in the future. Besides garnering support from the organizations named above as well as the Getty, the Mellon Foundation and others, the effort is being guided by experienced professionals at the top of their field. Get the point? If you're involved with creating metadata relating to any type of cultural object and/or images of such, this will need to be either on your bookshelf, or bookmarked in your browser, or both.
  7. Kamvar, S.; Haveliwala, T.; Golub, G.: Adaptive methods for the computation of PageRank (2003) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 2560) [ClassicSimilarity], result of:
          0.008834538 = score(doc=2560,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 2560, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2560)
      0.16666667 = coord(1/6)
    
    Abstract
    We observe that the convergence patterns of pages in the PageRank algorithm have a nonuniform distribution. Specifically, many pages converge to their true PageRank quickly, while relatively few pages take a much longer time to converge. Furthermore, we observe that these slow-converging pages are generally those pages with high PageRank. We use this observation to devise a simple algorithm to speed up the computation of PageRank, in which the PageRank of pages that have converged is not recomputed at each iteration after convergence. This algorithm, which we call Adaptive PageRank, speeds up the computation of PageRank by nearly 30%.
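Editor's note: the skipping idea lends itself to a compact illustration. The Python fragment below is a minimal dense-matrix sketch of an adaptive scheme, assuming a column-stochastic link matrix; the paper targets sparse web-scale matrices and its update rules differ in detail, so treat this as a sketch rather than the authors' algorithm.

```python
import numpy as np

def adaptive_pagerank(A, d=0.85, tol=1e-8, conv_tol=1e-10, max_iter=100):
    """Power iteration that stops recomputing pages once they converge.

    Minimal sketch, assuming A is a dense column-stochastic link matrix
    (A[i, j] = 1/outdegree(j) if page j links to page i, else 0).
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                # uniform starting vector
    active = np.ones(n, dtype=bool)        # pages whose rank is still updated
    for _ in range(max_iter):
        x_new = x.copy()                   # frozen pages keep their old value
        x_new[active] = d * (A[active] @ x) + (1.0 - d) / n
        delta = np.abs(x_new - x)
        active &= delta > conv_tol         # freeze pages that just converged
        x = x_new
        if delta.sum() < tol:              # global L1 convergence test
            break
    return x
```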
  8. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.00
    0.0013386346 = product of:
      0.008031808 = sum of:
        0.008031808 = weight(_text_:in in 2417) [ClassicSimilarity], result of:
          0.008031808 = score(doc=2417,freq=18.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.13525948 = fieldWeight in 2417, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2417)
      0.16666667 = coord(1/6)
    
    Abstract
    This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.
    - Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
    - While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
    - The sole reliance on citation data provides at best an incomplete and often shallow understanding of research - an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused.
    - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health.
    - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs.
    - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research (see the sketch after this entry).
    The validity of statistics such as the impact factor and h-index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of "experience." The justification for relying on them is that they are "readily available." The few studies of these statistics that were done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data. We do not dismiss citation statistics as a tool for assessing the quality of research: citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool. We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
    Content
    The full report is available online at: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf. - See also the article: Zitaten-Statistiken. In: Mitteilungen der Deutschen Mathematiker-Vereinigung. 2008, no. 3, pp. 198-203.
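Editor's note: since the report's argument turns on the impact factor (a simple average) and the h-index, a small worked sketch may help. The Python fragment below is a generic illustration, not code from the report; all example numbers are invented.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Two quite different citation records share the same h-index -- the kind
# of information loss the report criticizes.
print(h_index([100, 50, 10, 4, 3]))  # 4
print(h_index([5, 5, 4, 4]))         # 4

# An impact-factor-style statistic is a crude average over a skewed
# distribution (here, citations this year to a journal's recent articles).
cites_to_recent_articles = [120, 3, 2, 1, 0, 0, 0, 0]
print(sum(cites_to_recent_articles) / len(cites_to_recent_articles))
# 15.75, driven almost entirely by one highly cited paper
```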
  9. Final Report to the ALCTS CCS SAC Subcommittee on Metadata and Subject Analysis (2001) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 5016) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=5016,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 5016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5016)
      0.16666667 = coord(1/6)
    
    Abstract
    The charge for the SAC Subcommittee on Metadata and Subject Analysis states: Identify and study the major issues surrounding the use of metadata in the subject analysis and classification of digital resources. Provide discussion forums and programs relevant to these issues. Discussion forums should begin by Annual 1998. The continued need for the subcommittee should be reexamined by SAC no later than 2001.
  10. Horridge, M.; Knublauch, H.; Rector, A.; Stevens, R.; Wroe, C.: A practical guide to building OWL ontologies using the Protégé-OWL plugin and CO-ODE Tools (2004) 0.00
    0.0010411602 = product of:
      0.006246961 = sum of:
        0.006246961 = weight(_text_:in in 2057) [ClassicSimilarity], result of:
          0.006246961 = score(doc=2057,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 2057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2057)
      0.16666667 = coord(1/6)
    
    Abstract
    This guide introduces the Protégé-OWL plugin for creating OWL ontologies. Chapter 3 gives a brief overview of the OWL ontology language. Chapter 4 focuses on building an OWL-DL ontology and using a description logic reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy. Chapter 6 describes some OWL constructs, such as hasValue restrictions and enumerated classes, which aren't directly used in the main tutorial. Chapter 7 describes namespaces, importing ontologies, and various features and utilities of the Protégé-OWL application.
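Editor's note: the guide itself works through the Protégé GUI, but the two Chapter 6 constructs can also be sketched in code. Below is a minimal, hypothetical example using the owlready2 Python library (not mentioned in the guide; all class, property, and individual names are illustrative) of a hasValue restriction and an enumerated class.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, OneOf

onto = get_ontology("http://example.org/pizza.owl")

with onto:
    class Pizza(Thing): pass
    class Topping(Thing): pass
    class has_topping(ObjectProperty):
        domain = [Pizza]
        range = [Topping]

    mozzarella = Topping("mozzarella")
    parmesan = Topping("parmesan")

    # hasValue restriction: every MozzarellaPizza has this specific topping.
    class MozzarellaPizza(Pizza):
        equivalent_to = [Pizza & has_topping.value(mozzarella)]

    # Enumerated class: defined by a closed list of named individuals.
    class ItalianCheese(Thing):
        equivalent_to = [OneOf([mozzarella, parmesan])]
```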
  11. Hildebrand, M.; Ossenbruggen, J. van; Hardman, L.: An analysis of search-based user interaction on the Semantic Web (2007) 0.00
    0.0010411602 = product of:
      0.006246961 = sum of:
        0.006246961 = weight(_text_:in in 59) [ClassicSimilarity], result of:
          0.006246961 = score(doc=59,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 59, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=59)
      0.16666667 = coord(1/6)
    
    Abstract
    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of semantic search features that are used during query construction, the core search process, the presentation of the search results and user feedback on query and results. For each of these, we consider the functionality that the system provides and how this is made available through the user interface.
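Editor's note: the four analysis dimensions can be pictured as stages of a pipeline. The Python skeleton below is purely illustrative; the paper surveys 35 existing systems and prescribes no API, so every name here is an assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SemanticSearchPipeline:
    """Illustrative skeleton of the four stages the survey analyzes."""

    feedback_log: List[str] = field(default_factory=list)

    def construct_query(self, text: str) -> dict:
        # Query construction: e.g. autocompletion against ontology terms.
        return {"terms": text.split()}

    def search(self, query: dict) -> list:
        # Core search: match query terms against semantically annotated resources.
        return [{"uri": "urn:example:doc1", "score": 0.9}]

    def present(self, results: list) -> list:
        # Result presentation: e.g. rank or group results by semantic relations.
        return sorted(results, key=lambda r: r["score"], reverse=True)

    def record_feedback(self, note: str) -> None:
        # User feedback on query and results, feeding later refinement.
        self.feedback_log.append(note)
```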

Languages

  • d (German) 29
  • e (English) 21
