Search (1958 results, page 1 of 98)

  • language_ss:"e"
  • type_ss:"a"
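
  The two active filters above use Lucene/Solr field-query syntax, and the score breakdowns in the result list are classic Lucene "explain" trees. As a rough sketch of how such a page is produced (the endpoint, core name, and query term below are assumptions, not taken from this page), the same search can be issued against a Solr select handler, with debugQuery enabled to obtain the per-document explanations:

    import requests  # hypothetical Solr instance; adjust host and core name

    SOLR_SELECT = "http://localhost:8983/solr/lit/select"

    params = {
        "q": "toolkit",                            # free-text query term (assumed)
        "fq": ['language_ss:"e"', 'type_ss:"a"'],  # the two active filters shown above
        "rows": 20,                                # 20 hits per page, as in this listing
        "start": 0,                                # offset 0 = page 1 of 98
        "debugQuery": "on",                        # emit per-document score explanations
        "wt": "json",
    }

    data = requests.get(SOLR_SELECT, params=params, timeout=10).json()
    print(data["response"]["numFound"])            # total hits, e.g. 1958
    # data["debug"]["explain"] maps document ids to "product of / sum of"
    # trees like the ones printed under each result below.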
  1. Keith, C.: Using XSLT to manipulate MARC metadata (2004) 0.21
    0.20865986 = product of:
      0.41731972 = sum of:
        0.41731972 = sum of:
          0.37745494 = weight(_text_:toolkit in 4747) [ClassicSimilarity], result of:
            0.37745494 = score(doc=4747,freq=8.0), product of:
              0.3736465 = queryWeight, product of:
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.049039155 = queryNorm
              1.0101926 = fieldWeight in 4747, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.046875 = fieldNorm(doc=4747)
          0.03986477 = weight(_text_:22 in 4747) [ClassicSimilarity], result of:
            0.03986477 = score(doc=4747,freq=2.0), product of:
              0.17172676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049039155 = queryNorm
              0.23214069 = fieldWeight in 4747, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4747)
      0.5 = coord(1/2)
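
    For reference, the arithmetic in these ClassicSimilarity trees can be checked directly. A minimal sketch that recomputes the "toolkit" leaf and the final score for document 4747 from the numbers shown above (the tf, idf, and weight formulas are standard Lucene TF-IDF):

      import math

      doc_freq, max_docs = 58, 44218
      freq, field_norm, query_norm = 8.0, 0.046875, 0.049039155

      idf = math.log(max_docs / (doc_freq + 1)) + 1   # 7.61935
      tf = math.sqrt(freq)                            # 2.828427
      query_weight = idf * query_norm                 # 0.3736465
      field_weight = tf * idf * field_norm            # 1.0101926
      leaf = query_weight * field_weight              # 0.37745494, the "toolkit" leaf

      # coord(1/2): only one of the two top-level query clauses matched,
      # so the summed term scores are scaled by 0.5.
      final = (leaf + 0.03986477) * 0.5               # 0.20865986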
    
    Abstract
    This paper describes the MARCXML architecture implemented at the Library of Congress. It gives an overview of the component pieces of the architecture, including the MARCXML schema and the MARCXML toolkit, while giving a brief tutorial on their use. Several different applications of the architecture and tools are discussed to illustrate the features of the toolkit developed thus far. Nearly any metadata format can take advantage of the features of the toolkit, and the process by which the toolkit enables a new format is discussed. Finally, this paper intends to foster new ideas with regard to the transformation of descriptive metadata, especially using XML tools. In this paper the following conventions will be used: MARC21 will refer to MARC 21 records in the ISO 2709 record structure used today; MARCXML will refer to MARC 21 records in an XML structure.
    Source
    Library hi tech. 22(2004) no.2, p.122-130
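
    As a flavor of the XSLT-based transformations the paper describes (a toy sketch only; the stylesheet below is illustrative and is not part of the Library of Congress toolkit), lxml can apply an XSLT 1.0 stylesheet to a MARCXML record:

      from lxml import etree

      # Toy stylesheet: extract the main title (field 245, subfield a) from MARCXML.
      XSL = b"""<xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:marc="http://www.loc.gov/MARC21/slim">
        <xsl:output method="text"/>
        <xsl:template match="/">
          <xsl:value-of select="//marc:datafield[@tag='245']/marc:subfield[@code='a']"/>
        </xsl:template>
      </xsl:stylesheet>"""

      MARCXML = b"""<record xmlns="http://www.loc.gov/MARC21/slim">
        <datafield tag="245" ind1="1" ind2="0">
          <subfield code="a">Using XSLT to manipulate MARC metadata</subfield>
        </datafield>
      </record>"""

      transform = etree.XSLT(etree.XML(XSL))
      print(transform(etree.XML(MARCXML)))  # Using XSLT to manipulate MARC metadata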
  2. McMahon, T.E.: Procite 4: a look at the latest release in bibliographic management software (1998) 0.11
    0.11429612 = product of:
      0.22859225 = sum of:
        0.22859225 = sum of:
          0.18872747 = weight(_text_:toolkit in 2810) [ClassicSimilarity], result of:
            0.18872747 = score(doc=2810,freq=2.0), product of:
              0.3736465 = queryWeight, product of:
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.049039155 = queryNorm
              0.5050963 = fieldWeight in 2810, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.046875 = fieldNorm(doc=2810)
          0.03986477 = weight(_text_:22 in 2810) [ClassicSimilarity], result of:
            0.03986477 = score(doc=2810,freq=2.0), product of:
              0.17172676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049039155 = queryNorm
              0.23214069 = fieldWeight in 2810, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2810)
      0.5 = coord(1/2)
    
    Abstract
    On Nov 26, 1997, Research Information Systems released its newest version of the ProCite bibliographic management software. The most notable change to the programme is the retooling for compatibility with Windows 95 and NT. In addition to the Windows 95 upgrade, ProCite added 2 new workforms. These forms allow users to capture information about Web pages and e-mail messages. This latest release builds on the Cite While You Write feature that allows users to link citations in a single manuscript to records in multiple databases. The programme simplifies the generation of bibliographies and endnotes while allowing users to create bibliographic databases using 28 distinct workforms. Workforms cover a wide range of material types, including patents. While there are a few idiosyncrasies users should be aware of, this product is a solid addition to the librarian's toolkit and should be considered by those libraries that have a need for a small but powerful programme to catalogue resources and create bibliographies.
    Date
    6. 3.1997 16:22:15
  3. Pennell, B.: The ODA consortium toolkit (1994) 0.11
    0.108961865 = product of:
      0.21792373 = sum of:
        0.21792373 = product of:
          0.43584746 = sum of:
            0.43584746 = weight(_text_:toolkit in 2228) [ClassicSimilarity], result of:
              0.43584746 = score(doc=2228,freq=6.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                1.16647 = fieldWeight in 2228, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2228)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Open Document Architecture (ODA) standard provides a basis for transferring documents of any kind to another system, along with the attributes to support further processing. The ODA Consortium has developed a software toolkit enabling itself and other developers to make ODA-aware products. Describes what the Toolkit does and its components, and discusses its benefits and availability.
  4. Information-filtering software for the Internet from Verity (1996) 0.11
    0.108961865 = product of:
      0.21792373 = sum of:
        0.21792373 = product of:
          0.43584746 = sum of:
            0.43584746 = weight(_text_:toolkit in 4980) [ClassicSimilarity], result of:
              0.43584746 = score(doc=4980,freq=6.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                1.16647 = fieldWeight in 4980, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4980)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes Verity's AGENT SERVER TOOLKIT 1.0, a product for filtering information on the Internet. Documents may be filtered against personal profiles in real time. Sources may be chosen from WWW sites, e-mail or paging devices. The TOOLKIT can be used with all common WWW browsers and servers including those from Netscape, Microsoft and NCSA
    Object
    AGENT SERVER TOOLKIT
  5. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09781957 = sum of:
      0.07788718 = product of:
        0.23366153 = sum of:
          0.23366153 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23366153 = score(doc=562,freq=2.0), product of:
              0.4157545 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.049039155 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.019932386 = product of:
        0.03986477 = sum of:
          0.03986477 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03986477 = score(doc=562,freq=2.0), product of:
              0.17172676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049039155 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  6. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.10
    0.09524676 = product of:
      0.19049352 = sum of:
        0.19049352 = sum of:
          0.15727289 = weight(_text_:toolkit in 5259) [ClassicSimilarity], result of:
            0.15727289 = score(doc=5259,freq=2.0), product of:
              0.3736465 = queryWeight, product of:
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.049039155 = queryNorm
              0.42091358 = fieldWeight in 5259, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5259)
          0.03322064 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
            0.03322064 = score(doc=5259,freq=2.0), product of:
              0.17172676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049039155 = queryNorm
              0.19345059 = fieldWeight in 5259, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5259)
      0.5 = coord(1/2)
    
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up to date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System are also integrated in Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
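
    To make "co-occurrence based relations" concrete (a minimal sketch of the general technique only, not Genescene's actual Concept Space implementation), collection-level relations can be derived by counting noun-phrase pairs that appear together in the same abstract:

      from collections import Counter
      from itertools import combinations

      # Each abstract reduced to its extracted noun phrases (toy input).
      abstracts = [
          {"p53", "apoptosis", "dna damage"},
          {"p53", "apoptosis"},
          {"p53", "cell cycle"},
      ]

      pair_counts = Counter()
      for phrases in abstracts:
          for a, b in combinations(sorted(phrases), 2):
              pair_counts[(a, b)] += 1

      # Pairs co-occurring in more than one abstract suggest a relation that
      # holds for the collection rather than for a single document.
      for pair, n in pair_counts.most_common():
          if n > 1:
              print(pair, n)  # ('apoptosis', 'p53') 2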
  7. Dunsire, G.; Nicholson, D.: Signposting the crossroads : terminology Web services and classification-based interoperability (2010) 0.10
    0.09524676 = product of:
      0.19049352 = sum of:
        0.19049352 = sum of:
          0.15727289 = weight(_text_:toolkit in 4066) [ClassicSimilarity], result of:
            0.15727289 = score(doc=4066,freq=2.0), product of:
              0.3736465 = queryWeight, product of:
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.049039155 = queryNorm
              0.42091358 = fieldWeight in 4066, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
          0.03322064 = weight(_text_:22 in 4066) [ClassicSimilarity], result of:
            0.03322064 = score(doc=4066,freq=2.0), product of:
              0.17172676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049039155 = queryNorm
              0.19345059 = fieldWeight in 4066, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
      0.5 = coord(1/2)
    
    Abstract
    The focus of this paper is the provision of terminology- and classification-based interoperability data via Web services, initially using interoperability data based on the use of a Dewey Decimal Classification (DDC) spine, but with an aim to explore other possibilities in time, including the use of other spines. The High-Level Thesaurus Project (HILT) Phase IV developed pilot Web services based on SRW/U, SOAP, and SKOS to deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to enhance their subject search or browse services. It also developed an associated toolkit to help the technical staff of information services embed HILT-related functionality within service interfaces. Several UK information services have created illustrative user interface enhancements using HILT functionality, and these demonstrate what is possible. HILT currently has the following subject schemes mounted and available: DDC, CAB, GCMD, HASSET, IPSV, LCSH, MeSH, NMR, SCAS, UNESCO, and AAT. It also has high-level mappings between some of these schemes and DDC and some deeper pilot mappings available.
    Date
    6. 1.2011 19:22:48
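
    The SKOS layer of the HILT stack is straightforward to illustrate. A sketch (the concept URIs below are invented for illustration) of expressing a cross-terminology mapping onto a DDC spine as SKOS triples with rdflib:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import SKOS

      LCSH = Namespace("http://example.org/lcsh/")  # hypothetical scheme URIs
      DDC = Namespace("http://example.org/ddc/")

      g = Graph()
      g.bind("skos", SKOS)

      concept = LCSH["Cataloging"]
      g.add((concept, SKOS.prefLabel, Literal("Cataloging", lang="en")))
      # A cross-scheme mapping onto the DDC spine: the kind of data the
      # HILT pilot services deliver in machine-readable form.
      g.add((concept, SKOS.closeMatch, DDC["025.3"]))

      print(g.serialize(format="turtle"))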
  8. Shafer, K.E.: Mantis Project : A Toolkit for Cataloging (2001) 0.09
    0.094363734 = product of:
      0.18872747 = sum of:
        0.18872747 = product of:
          0.37745494 = sum of:
            0.37745494 = weight(_text_:toolkit in 1028) [ClassicSimilarity], result of:
              0.37745494 = score(doc=1028,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                1.0101926 = fieldWeight in 1028, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1028)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Godby, C.J.; Reighart, R.R.: The WordSmith Toolkit (2001) 0.09
    0.094363734 = product of:
      0.18872747 = sum of:
        0.18872747 = product of:
          0.37745494 = sum of:
            0.37745494 = weight(_text_:toolkit in 1055) [ClassicSimilarity], result of:
              0.37745494 = score(doc=1055,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                1.0101926 = fieldWeight in 1055, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1055)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Delfino, E.: The Internet toolkit : file compression and archive utilities (1993) 0.08
    0.078636445 = product of:
      0.15727289 = sum of:
        0.15727289 = product of:
          0.31454578 = sum of:
            0.31454578 = weight(_text_:toolkit in 6718) [ClassicSimilarity], result of:
              0.31454578 = score(doc=6718,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.84182715 = fieldWeight in 6718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6718)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Wackerow, J.: The Data Documentation Initiative (DDI) (2008) 0.07
    0.066672735 = product of:
      0.13334547 = sum of:
        0.13334547 = sum of:
          0.110091016 = weight(_text_:toolkit in 2662) [ClassicSimilarity], result of:
            0.110091016 = score(doc=2662,freq=2.0), product of:
              0.3736465 = queryWeight, product of:
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.049039155 = queryNorm
              0.2946395 = fieldWeight in 2662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.61935 = idf(docFreq=58, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2662)
          0.023254449 = weight(_text_:22 in 2662) [ClassicSimilarity], result of:
            0.023254449 = score(doc=2662,freq=2.0), product of:
              0.17172676 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049039155 = queryNorm
              0.1354154 = fieldWeight in 2662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2662)
      0.5 = coord(1/2)
    
    Abstract
    The Data Documentation Initiative (DDI) is an international effort to establish an XML-based standard for the compilation, presentation, and exchange of documentation for datasets in the social and behavioral sciences. The most recent version 3.0 of the DDI supports a rich and structured set of metadata elements that not only fully informs a potential data analyst about a given dataset but also facilitates computer processing of the data. Moreover, data producers will find that by adopting the DDI standard they can produce better and more complete documentation as a natural step in designing and fielding computer-assisted interviewing. DDI 3.0 embraces the full life cycle of the data from conception, through development of the data collection instrument, collection and cleaning of data, production of data products, distribution, preservation, and reuse or analysis of the data.

    DDI 3.0 is designed to facilitate sharing schemes for concepts, questions, coding, and variables within organizations or throughout the social science research community. Comparison through direct inheritance, as in the case of comparison-by-design, or through the mapping of items like variables or categories allows capture of the harmonization processes used in creating integrated files in a uniform and machine-actionable way. DDI 3.0 provides the structural support needed to facilitate comparative survey work in a way that was previously unavailable in an open, non-proprietary system.

    A specific DDI module allows for the capture and expression of native Dublin Core elements (DCMES), used either as references or as descriptions of a particular set of metadata. This module uses the simple Dublin Core namespace represented as XML Schema, following the guidelines for implementing Dublin Core in XML. In DDI, the Dublin Core is not used as the primary citation mechanism - this module is included to support applications which understand the Dublin Core XML, but which do not understand DDI. This module is used wherever citations are permitted within DDI 3.0 (like citations of a study description or of other material). DDI 3.0 is aligned with other metadata standards as well: with SDMX (time-series data) for exchanging aggregate data, with ISO/IEC 11179 (metadata registry) for building data registries such as question, variable, and concept banks, and with FGDC and ISO 19115 (geographic standards) for supporting GIS users.

    DDI 3.0 is described in a conceptual model which is also expressed in the Unified Modeling Language (UML). Modular XML Schemas are derived from the conceptual model. Many elements support computer processing - that is, the standard goes beyond being "human readable" and moves toward the goal of being "machine-actionable". The final release of DDI 3.0 was published on April 28, 2008. The standard was developed by the DDI Alliance, an international group encompassing data archives and research institutions from several countries in Western Europe and North America. Earlier versions of DDI provide examples of institutions and applications: the Inter-university Consortium for Political and Social Research (ICPSR) Data Catalog, the Council of European Social Science Data Services (CESSDA) Data Portal, the Dataverse Network, the International Household Survey Network (IHSN), NESSTAR Software for publishing data on the Web and online analysis, and the Microdata Management Toolkit (by the World Bank Data Group for IHSN).
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
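
    The Dublin Core module described above follows the DCMI guidelines for expressing simple Dublin Core in XML. A minimal sketch of such a citation block (the citation wrapper element and values here are illustrative; the enclosing DDI 3.0 structures are omitted):

      import xml.etree.ElementTree as ET

      DC = "http://purl.org/dc/elements/1.1/"  # simple Dublin Core namespace
      ET.register_namespace("dc", DC)

      # A simple Dublin Core citation of the kind embeddable wherever
      # DDI 3.0 permits citations (values are made up).
      citation = ET.Element("citation")
      for name, value in [
          ("title", "Example Social Survey 2008"),
          ("creator", "Example Data Archive"),
          ("date", "2008-04-28"),
      ]:
          ET.SubElement(citation, f"{{{DC}}}{name}").text = value

      print(ET.tostring(citation, encoding="unicode"))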
  12. King, R.; Novak, M.: Designing database interface with DBFace (1993) 0.06
    0.062909156 = product of:
      0.12581831 = sum of:
        0.12581831 = product of:
          0.25163662 = sum of:
            0.25163662 = weight(_text_:toolkit in 6269) [ClassicSimilarity], result of:
              0.25163662 = score(doc=6269,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.67346174 = fieldWeight in 6269, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6269)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    DBFace is a toolkit for designing custom interfaces to object-oriented databases with minimal programming, allowing users to create graphical structures and interactive techniques. Outlines its unique features and provides background on User Interface Management Systems (UIMS) to show how DBFace differs from the UIMSs available and why the differences are important. Describes the architecture and functionality of DBFace, provides a network example, and outlines future directions.
  13. Turner, F.: Z39.50 and information retrieval toolkit software (1994) 0.06
    0.062909156 = product of:
      0.12581831 = sum of:
        0.12581831 = product of:
          0.25163662 = sum of:
            0.25163662 = weight(_text_:toolkit in 948) [ClassicSimilarity], result of:
              0.25163662 = score(doc=948,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.67346174 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0625 = fieldNorm(doc=948)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Cullen, C.: Verity agent technology : automatic filtering, matching and dissemination of information (1996) 0.06
    0.062909156 = product of:
      0.12581831 = sum of:
        0.12581831 = product of:
          0.25163662 = sum of:
            0.25163662 = weight(_text_:toolkit in 2415) [ClassicSimilarity], result of:
              0.25163662 = score(doc=2415,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.67346174 = fieldWeight in 2415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2415)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the use of intelligent agents to filter and categorise information on the Web. Describes SEARCH'97 Agents from Verity UK. Outlines agent capabilities: filtering, categorising, gathering, defining, routing, launching, editing, using bingo card agent forms, using agents automatically, delivery options and modes, and delivery content and schedules. Describes 3 case studies of applications created using Verity's Agent Server Toolkit in partnership with Time Inc. New Media, Knight-Ridder New Media, and Xilinx Industry Guide.
  15. Kuhagen, J.: RDA content in multiple languages : a new standard not only for libraries (2016) 0.06
    0.062909156 = product of:
      0.12581831 = sum of:
        0.12581831 = product of:
          0.25163662 = sum of:
            0.25163662 = weight(_text_:toolkit in 2955) [ClassicSimilarity], result of:
              0.25163662 = score(doc=2955,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.67346174 = fieldWeight in 2955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2955)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Summarizes the presence of RDA content in languages other than English in the RDA Toolkit, in the RDA Registry, in the RIMMF data editor, and as separate translations. Translation policy is explained, and the benefits of translation for the content of RDA are noted.
  16. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.06
    0.05560436 = product of:
      0.11120872 = sum of:
        0.11120872 = product of:
          0.22241744 = sum of:
            0.22241744 = weight(_text_:toolkit in 658) [ClassicSimilarity], result of:
              0.22241744 = score(doc=658,freq=4.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.5952617 = fieldWeight in 658, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments have been performed using the open source Annif toolkit for automated subject indexing and classification, but should also generalize to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform baseline methods in text classification, particularly for Finnish and Swedish text, but not for English, where baseline methods are most effective. The differences between lemmatization methods are quite small. The systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
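
    The stemming-versus-lemmatization distinction evaluated in the paper is easy to see on individual words. A quick sketch using NLTK (one common choice of library; whether it matches the exact analyzers compared in the paper is not stated here):

      import nltk
      from nltk.stem import SnowballStemmer, WordNetLemmatizer

      nltk.download("wordnet", quiet=True)  # the lemmatizer needs WordNet data

      stemmer = SnowballStemmer("english")
      lemmatizer = WordNetLemmatizer()

      for word in ["indexing", "libraries", "classified"]:
          print(word, stemmer.stem(word), lemmatizer.lemmatize(word))
      # e.g. "libraries" stems to "librari" (not a word) but lemmatizes to
      # "library"; this reduction of variation is what the paper measures.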
  17. Brown, J.C.; Sadik, A.K.: Cataloguing, indexing, searching and browsing multiple postscript documents (1995) 0.06
    0.055045508 = product of:
      0.110091016 = sum of:
        0.110091016 = product of:
          0.22018203 = sum of:
            0.22018203 = weight(_text_:toolkit in 5794) [ClassicSimilarity], result of:
              0.22018203 = score(doc=5794,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.589279 = fieldWeight in 5794, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5794)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the development of a system of automatic cataloguing, indexing and retrieval of PostScript documents and figures generated from Microsoft Windows packages by university departments. XView provides a GUI toolkit used in the project for building a document reader interface to display document lists, multiple windows containing contents, abstracts, search results, diagrams, and PostScript text. Discusses the indexing of PostScript; the indexer; the reader; transfer of data structures between indexer and reader; indexing and content-list generation of single documents; scrolling lists and instance highlighting in PostScript; the querying algorithm; and figure display and PostScript canvases.
  18. Acedera, A.P.: Are Philippine librarians ready for resource description and access (RDA)? : the Mindanao experience (2014) 0.06
    0.055045508 = product of:
      0.110091016 = sum of:
        0.110091016 = product of:
          0.22018203 = sum of:
            0.22018203 = weight(_text_:toolkit in 1983) [ClassicSimilarity], result of:
              0.22018203 = score(doc=1983,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.589279 = fieldWeight in 1983, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1983)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study aimed to find out the level of readiness of Mindanao librarians to use Resource Description and Access (RDA), which has been prescribed and adopted by the Philippine Professional Regulatory Board for Librarians (PRBFL). The majority of librarians are aware of the PRBFL prescription and adoption. Librarians who received more RDA training felt that their RDA training was adequate and were more comfortable with the use of RDA compared with those who received little or no RDA training. An important finding of the study is that most Mindanao libraries do not have access to the RDA Toolkit.
  19. Lisius, P.H.: AACR2 to RDA : is knowledge of both needed during the transition period? (2015) 0.06
    0.055045508 = product of:
      0.110091016 = sum of:
        0.110091016 = product of:
          0.22018203 = sum of:
            0.22018203 = weight(_text_:toolkit in 2008) [ClassicSimilarity], result of:
              0.22018203 = score(doc=2008,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.589279 = fieldWeight in 2008, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2008)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The cataloging community is at a crossroads. Will catalogers need to continue learning both Anglo-American Cataloguing Rules, Second Edition (AACR2) and Resource Description and Access (RDA), or will learning RDA alone be enough? Through a selective literature review and an examination of the RDA Toolkit, it seems that there is currently a collective need to have access to both codes. However, when considering Library of Congress-Program for Cooperative Cataloging (LC-PCC) and OCLC initiatives, together with an example from this author's institution relating to authority control in RDA and bibliographic record hybridization, it may only be necessary to learn RDA in the future. Additional research into practitioner experience could further examine this.
  20. Dunsire, G.; Fritz, D.; Fritz, R.: Instructions, interfaces, and interoperable data : the RIMMF experience with RDA revisited (2020) 0.06
    0.055045508 = product of:
      0.110091016 = sum of:
        0.110091016 = product of:
          0.22018203 = sum of:
            0.22018203 = weight(_text_:toolkit in 5751) [ClassicSimilarity], result of:
              0.22018203 = score(doc=5751,freq=2.0), product of:
                0.3736465 = queryWeight, product of:
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.049039155 = queryNorm
                0.589279 = fieldWeight in 5751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.61935 = idf(docFreq=58, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5751)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents a case study of RIMMF, a software tool developed to improve the orientation and training of catalogers who use Resource Description and Access (RDA) to maintain bibliographic data. The cataloging guidance and instructions of RDA are based on the Functional Requirements conceptual models that are now consolidated in the IFLA Library Reference Model, but many catalogers are applying RDA in systems that have evolved from inventory and text-processing applications developed from older metadata paradigms. The article describes how RIMMF interacts with the RDA Toolkit and RDA Registry to offer cataloger-friendly multilingual data input and editing interfaces.

Types

  • el 34
  • b 31
  • p 1