Search (13 results, page 1 of 1)

  • author_ss:"Taniguchi, S."
  1. Taniguchi, S.: A system for analyzing cataloguing rules : a feasibility study (1996) 0.01
    0.013979755 = product of:
      0.06989878 = sum of:
        0.06989878 = weight(_text_:system in 4198) [ClassicSimilarity], result of:
          0.06989878 = score(doc=4198,freq=18.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.52196956 = fieldWeight in 4198, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4198)
      0.2 = coord(1/5)
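
    The explanation tree above is Lucene ClassicSimilarity (TF-IDF) explain output. As a check on the arithmetic, here is a minimal Python sketch that reproduces the listed score from the tree's own values, using the ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(N / (df + 1)):

    import math

    # Values copied from the explain tree for doc 4198 above; the score is
    # coord * (idf * queryNorm) * (tf * idf * fieldNorm).
    freq, doc_freq, max_docs = 18.0, 5152, 44218
    query_norm, field_norm = 0.04251826, 0.0390625
    coord = 1 / 5  # only 1 of the 5 query clauses matched

    tf = math.sqrt(freq)                           # 4.2426405
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.1495528
    query_weight = idf * query_norm                # 0.13391352 (queryWeight)
    field_weight = tf * idf * field_norm           # 0.52196956 (fieldWeight)
    print(coord * query_weight * field_weight)     # ~0.01397976, matching the
                                                   # listed score up to float32
                                                   # rounding inside Lucene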
    
    Abstract
    The quality control of cataloging standards is as important as the quality control of bibliographic records. In order to aid the quality control of cataloging standards, a prototype system to analyze the ambiguity and complexity of cataloging rules was developed. Before developing the system, a standard rule unit was defined and a simple, function-like format was devised to indicate the syntactic structure of each unit rule. The AACR2 chapter 1 rules were then manually transformed into this function-like, unit rule format. The system reads the manually transformed unit rules and puts them into their basic forms based on their syntactic components. The system then applies rule-templates, which are skeletal schemata for specific types of cataloging rules, to the converted rules. As a result of this rule-template application, the internal structure of each unit rule is determined. The system is also used to explore inter-rule relationships; that is, the system determines whether two rules have an exclusive, parallel, complementary, or non-relationship. These relationships are based on the analysis of the structural parts described above in terms of the given rule-template. To assist in this process, the system applies external knowledge represented in the same fashion as the rule units themselves. Although the prototype system can handle only a restricted range of rules, the proposed approach is positively validated and shown to be useful. However, it may be impractical to build a complete rule-analyzing system of this type at this stage.
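
    A hypothetical sketch of the rule-analysis idea: unit rules in a function-like (element/condition/action) format, and a classification of rule pairs into the exclusive, parallel, complementary, or non-relationships named above. The data class, the example rules, and the comparison logic are illustrative assumptions, not the paper's actual implementation:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UnitRule:
        element: str    # the data element the rule governs
        condition: str  # when the rule applies
        action: str     # what the rule instructs the cataloger to do

    def relationship(a: UnitRule, b: UnitRule) -> str:
        """Classify a rule pair as exclusive, parallel, complementary, or none."""
        if a.element != b.element:
            return "non-relationship"  # rules about different elements
        if a.condition == b.condition:
            # same trigger: same action is parallel, conflicting action exclusive
            return "parallel" if a.action == b.action else "exclusive"
        return "complementary"         # same element, disjoint cases

    r1 = UnitRule("title proper", "source lacks a title", "supply a title")
    r2 = UnitRule("title proper", "source bears a title", "transcribe the title")
    print(relationship(r1, r2))  # complementary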
  2. Taniguchi, S.: A system for supporting evidence recording in bibliographic records (2006) 0.01
    0.010419896 = product of:
      0.052099477 = sum of:
        0.052099477 = weight(_text_:system in 282) [ClassicSimilarity], result of:
          0.052099477 = score(doc=282,freq=10.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.38905317 = fieldWeight in 282, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=282)
      0.2 = coord(1/5)
    
    Abstract
    Recording evidence for data values, in addition to the values themselves, in bibliographic records and descriptive metadata has been proposed in a previous study. Recorded evidence indicates why and how data values are recorded for elements. As a continuation of that study, this article first proposes a scenario in which a cataloger and a system interact with each other in recording evidence in bibliographic records for books, with the aim of minimizing costs and effort in recording evidence. Second, it reports on prototype system development in accordance with the scenario. The system (1) searches a string, corresponding to the data value entered by a cataloger or extracted from the Machine Readable Cataloging (MARC) record, within the scanned and optical character recognition (OCR)-converted title page and verso of the title page of an item being cataloged; (2) identifies the place where the string appears within the source of information; (3) identifies the procedure being used to form the value entered or recorded; and finally (4) displays the place and procedure identified for the data value as its candidate evidence. Third, this study reports on an experiment conducted to examine the system's performance. The results of the experiment show the usefulness of the system and the validity of the proposed scenario.
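
    Steps (1) and (2) of the workflow invite a short illustration: locating a data value within OCR-converted source-of-information text and reporting where it was found. The whitespace/case normalization and the source labels below are assumptions of this sketch, not the prototype's actual matching logic:

    def find_evidence(value: str, sources: dict[str, str]):
        """Return (source name, character offset) of the first match, else None."""
        needle = " ".join(value.lower().split())  # normalize case and whitespace
        for name, text in sources.items():
            pos = " ".join(text.lower().split()).find(needle)
            if pos != -1:
                return name, pos
        return None

    ocr_pages = {
        "title page": "ADVANCED CATALOGING  Theory and Practice  Second Edition",
        "verso of title page": "Copyright 2006 by the publisher ...",
    }
    print(find_evidence("Advanced cataloging", ocr_pages))  # ('title page', 0)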
  3. Taniguchi, S.: A system for supporting evidence recording in bibliographic records : Part II: what is valuable evidence for catalogers? (2007) 0.01
    0.009319837 = product of:
      0.046599183 = sum of:
        0.046599183 = weight(_text_:system in 325) [ClassicSimilarity], result of:
          0.046599183 = score(doc=325,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3479797 = fieldWeight in 325, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=325)
      0.2 = coord(1/5)
    
    Abstract
    Recording evidence for data element values, in addition to the values themselves, in bibliographic records and descriptive metadata is likely to be useful for improving the expressivity and reliability of such records and metadata. Recorded evidence indicates why and how data values are recorded for elements. This article is Part II of a study exploring a way of assisting catalogers in recording evidence in bibliographic records, with the aim of minimizing the costs and effort of doing so. The article begins with a scenario in which a cataloger refers to recorded evidence to understand the ways that have been adopted to record data value(s) in a given element. In line with that scenario, the proper content of evidence to be recorded is first discussed. Second, the functionality of the system developed in Part I is extended and refined to make the system more useful and effective in recording such evidence. Third, the system's performance is experimentally examined, and the results show its usefulness. Fourth, another system is developed for catalogers to retrieve and display recorded evidence together with bibliographic records in a flexible way.
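
    A minimal sketch of what such recorded evidence might look like as data, and of letting a cataloger retrieve the evidence recorded for a given element. The record layout and field names are assumptions of this sketch, not the article's format:

    from dataclasses import dataclass

    @dataclass
    class Evidence:
        element: str    # bibliographic element, e.g. "title proper"
        value: str      # the data value that was recorded
        place: str      # where in the item the value was found
        procedure: str  # how the recorded value was formed from what appears

    def evidence_for(element: str, log: list) -> list:
        """Show a cataloger how a given element has been recorded before."""
        return [e for e in log if e.element == element]

    log = [Evidence("title proper", "Advanced cataloging", "title page",
                    "transcribed; capitalization normalized")]
    for e in evidence_for("title proper", log):
        print(f"{e.value!r}: {e.place}; {e.procedure}")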
  4. Taniguchi, S.: Expression-level bibliographic entity records : a trial on creation from pre-existing MARC records (2004) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 5658) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5658,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5658)
      0.2 = coord(1/5)
    
    Abstract
    This paper reports on a study to investigate the feasibility of creating bibliographic records in accordance with a model giving primacy to the expression-level entity, through attempts at converting existing MARC records. First, methods of creating records were examined in terms of the structure of records, and a method that explicitly shows the structure of the underlying model was selected. Second, a trial was conducted to convert USMARC bibliographic records into records structured according to the selected method, by developing programs to facilitate the conversion. Third, a prototype system to use the structured records was developed in order to demonstrate the usefulness of such records.
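
    A schematic sketch of the core conversion move: splitting a flat record's fields between a shared expression-level entity and the manifestation that embodies it. The field grouping and record structure are illustrative assumptions, not the study's conversion programs:

    # Fields assumed (for illustration) to belong to the expression entity.
    EXPRESSION_FIELDS = {"uniform_title", "language", "translator"}

    def split_record(flat: dict) -> tuple:
        """Split a flat record into an expression entity and a manifestation."""
        expression = {k: v for k, v in flat.items() if k in EXPRESSION_FIELDS}
        manifestation = {k: v for k, v in flat.items()
                         if k not in EXPRESSION_FIELDS}
        # Manifestations embodying the same expression share one entity record.
        manifestation["expression_key"] = tuple(sorted(expression.items()))
        return expression, manifestation

    flat = {"uniform_title": "Genji monogatari", "language": "eng",
            "translator": "Seidensticker, E.", "publisher": "Knopf",
            "date": "1976"}
    expression, manifestation = split_record(flat)
    print(expression)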
  5. Tokita, T.; Koto, M.; Miyata, Y.; Yokoyama, Y.; Taniguchi, S.; Ueda, S.: Identifying works for Japanese classics toward construction of FRBRized OPACs (2012) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 1925) [ClassicSimilarity], result of:
          0.03261943 = score(doc=1925,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 1925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1925)
      0.2 = coord(1/5)
    
    Abstract
    A research project was conducted in which proper JAPAN/MARC bibliographic records for 158 major Japanese classical works were identified manually, since existing records contain little information about works included in the resources. This paper reports the detailed method used for work identification, including selecting works, obtaining the bibliographic records to be judged, and building the judgment criteria. The results of the work identification process are reported along with average numbers that indicate the characteristics of certain classics. The necessity of manual identification was justified through an evaluation of searches by author and/or title information in a conventional retrieval system.
  6. Taniguchi, S.: Is BIBFRAME 2.0 a suitable schema for exchanging and sharing diverse descriptive metadata about bibliographic resources? (2018) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 5165) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5165,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5165)
      0.2 = coord(1/5)
    
    Abstract
    Knowledge organization systems have been studied in several fields and from different and complementary perspectives. Among the aspects of common interest, this article highlights those related to the terminological and conceptual relationships among the components of any knowledge organization system. This research aims to contribute to the critical analysis of knowledge organization systems, especially ontologies, thesauri, and classification systems, through an understanding of their similarities and differences in how they deal with concepts, with the ways concepts relate to each other, and with the conceptual design that is adopted.
  7. Taniguchi, S.: Design of cataloging rules using conceptual modeling of cataloging process (2004) 0.00
    0.0046599186 = product of:
      0.023299592 = sum of:
        0.023299592 = weight(_text_:system in 2264) [ClassicSimilarity], result of:
          0.023299592 = score(doc=2264,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 2264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2264)
      0.2 = coord(1/5)
    
    Abstract
    This article proposes a method to design cataloging rules by utilizing conceptual modeling of the cataloging process and by applying the concept of "orientedness." It also proposes a general model for the cataloging process at the conceptual level, which is independent of any situation/system or cataloging code. The design method is made up of the following phases, including the development of the general model. Functional and non-functional requirements are first specified by use of orientedness, and cataloger tasks, which are constituents of the cataloging process, are defined. Second, a core model is built, which consists of (1) basic event patterns under each task, (2) action patterns applicable to each event, and (3) the orientedness involved in each event-action pair. Third, the core model is propagated to reflect the characteristics of an individual data element and also a certain class of materials. Finally, the propagated model is defined by choosing pairs of event and action patterns in the model while referring to the orientedness indicated in each event-action pair, in order to match a particular situation. As a result, a set of event-action pairs reflecting specific requirements through categories of orientedness is obtained, and consistent and scalable design can therefore be attained.
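
    A conjectural sketch of the core model as data: basic event patterns under each cataloger task, action patterns applicable to each event, and the orientedness of each event-action pair, with selection of pairs by a required orientedness. The concrete tasks, events, and orientedness categories are illustrative placeholders, not the article's model:

    # (event pattern, action pattern, orientedness) triples under each task.
    core_model = {
        "identify title": [
            ("title appears on source", "transcribe as found", "source-oriented"),
            ("title appears on source", "normalize punctuation", "user-oriented"),
            ("no title on source", "supply a title in brackets", "user-oriented"),
        ],
    }

    def select_pairs(model: dict, wanted: str) -> dict:
        """Keep event-action pairs whose orientedness matches a requirement."""
        return {task: [(e, a) for e, a, o in pairs if o == wanted]
                for task, pairs in model.items()}

    print(select_pairs(core_model, "user-oriented"))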
  8. Taniguchi, S.: Understanding RDA as a DC application profile (2013) 0.00
    0.0031002287 = product of:
      0.015501143 = sum of:
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 1906) [ClassicSimilarity], result of:
              0.04650343 = score(doc=1906,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 1906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1906)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 5.2015 19:05:09
  9. Taniguchi, S.: Current status of cataloging and classification education in Japan (2005) 0.00
    0.0027127003 = product of:
      0.013563501 = sum of:
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 5752) [ClassicSimilarity], result of:
              0.0406905 = score(doc=5752,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 5752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5752)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 9.2008 19:04:28
  10. Taniguchi, S.: Viewing RDA from FRBR and FRAD : does RDA represent a different conceptual model? (2012) 0.00
    0.0027127003 = product of:
      0.013563501 = sum of:
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 1936) [ClassicSimilarity], result of:
              0.0406905 = score(doc=1936,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 1936, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 5.2015 11:12:56
  11. Taniguchi, S.: Aggregate and component entities in RDA : model and description (2013) 0.00
    0.0027127003 = product of:
      0.013563501 = sum of:
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 1951) [ClassicSimilarity], result of:
              0.0406905 = score(doc=1951,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 1951, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1951)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 5.2015 19:02:49
  12. Taniguchi, S.: User tasks in the RDA-based model (2013) 0.00
    0.0027127003 = product of:
      0.013563501 = sum of:
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 1956) [ClassicSimilarity], result of:
              0.0406905 = score(doc=1956,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 1956, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1956)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 5.2015 19:14:26
  13. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.00
    0.0023042548 = product of:
      0.011521274 = sum of:
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
              0.03456382 = score(doc=3565,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 3565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3565)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    18. 6.2005 13:16:22