Search (9 results, page 1 of 1)

  • author_ss:"Taniguchi, S."
  1. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
          0.042483795 = score(doc=3565,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 3565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3565)
      0.2 = coord(1/5)
    
    Date
    18. 6.2005 13:16:22
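     For orientation, the indented block under each hit is Lucene's explain output for its classic TF-IDF ranking: a matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and the summed term scores are scaled by the coord factor. The following Python sketch is an illustration, not the search engine's code; it re-derives the breakdown shown for result 1, assuming Lucene's ClassicSimilarity defaults tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)).

        import math

        # Minimal sketch reproducing the ClassicSimilarity breakdown for doc 3565.
        freq = 2.0                                # termFreq of the matched term
        tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
        idf = 1 + math.log(44218 / (3622 + 1))    # 3.5018296 = idf(docFreq=3622, maxDocs=44218)
        query_norm = 0.052260913                  # queryNorm
        field_norm = 0.046875                     # fieldNorm(doc=3565)
        coord = 1 / 5                             # coord(1/5): 1 of 5 query clauses matched

        query_weight = idf * query_norm           # 0.18300882
        field_weight = tf * idf * field_norm      # 0.23214069
        term_score = query_weight * field_weight  # 0.042483795 = weight(_text_:22 in 3565)
        print(coord * term_score)                 # ~0.008496759, the document score above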
  2. Taniguchi, S.: Reevaluation of the 3-layered model in descriptive cataloguing (1997) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 91) [ClassicSimilarity], result of:
          0.03381079 = score(doc=91,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 91, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=91)
      0.2 = coord(1/5)
    
    Abstract
     Several years ago a conceptual framework was proposed that was designed to capture a bibliographic item by means of a structured approach and to present it in a structured manner in a bibliographic record: the 3-layered approach. Recently IFLA published the report of a study entitled 'Functional requirements for bibliographic records: draft report for worldwide review' for the purpose of a thorough reexamination of the question based on an analysis of user needs. The IFLA report attempted to capture the bibliographic universe through E-R analysis and to define entities, attributes of entities and relationships between them, all of which constitute the bibliographic universe. Compares the 3-layered model and the IFLA model, culminating in a reevaluation of the 3-layered model.
  3. Taniguchi, S.: Current status of cataloging and classification education in Japan (2005) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 5752) [ClassicSimilarity], result of:
          0.03381079 = score(doc=5752,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 5752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5752)
      0.2 = coord(1/5)
    
    Abstract
     This paper provides an overview of the current status of cataloging and classification (C&C) education in Japan and looks ahead to developments expected in the near future. First, the current status of library and information science (LIS) education and its major issues are briefly reviewed. Second, the situation of C&C practice in Japanese libraries is briefly reviewed, since it affects C&C education. Third, the present situation and issues in C&C education are examined and described under two categories: education in LIS schools and education in LIS programs offered by other colleges and universities. Finally, on-the-job training and continuing education in the C&C domain are discussed.
  4. Taniguchi, S.: Viewing RDA from FRBR and FRAD : does RDA represent a different conceptual model? (2012) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 1936) [ClassicSimilarity], result of:
          0.03381079 = score(doc=1936,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 1936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1936)
      0.2 = coord(1/5)
    
    Abstract
    Resource Description and Access (RDA) was analyzed through a comparison between the Functional Requirements for Bibliographic Records (FRBR) and Functional Requirements for Authority Data (FRAD) models and the model that RDA directly reflects. First, it was clarified that RDA adopts the FRBR entities but with some differences, such as the relationship between work and manifestation and the treatment of "title of the expression." Second, for the FRAD scope, a slightly different model that reflects RDA directly was proposed, incorporating the decomposition of FRAD entities as well as a new entity "description."
  5. Taniguchi, S.: Aggregate and component entities in RDA : model and description (2013) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 1951) [ClassicSimilarity], result of:
          0.03381079 = score(doc=1951,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 1951, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1951)
      0.2 = coord(1/5)
    
    Abstract
     Based on the distinction between model and description in Resource Description and Access (RDA), the modeling and description of aggregate and component entities in RDA was examined. Guidelines and instructions related to such modeling were extracted from RDA and reconciled. After introducing additional assumptions, five possible model patterns of aggregate and component entities were developed. Then, the mapping between these model patterns and the manifestation types was clarified, revealing which model patterns are applicable to a given type of manifestation. Finally, RDA instructions on descriptions for aggregates/components were examined, and it was clarified that they do not conflict with the modeling.
  6. Taniguchi, S.: Conceptual modeling of component parts of bibliographic resources in cataloging (2003) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 4442) [ClassicSimilarity], result of:
          0.028980678 = score(doc=4442,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 4442, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=4442)
      0.2 = coord(1/5)
    
    Abstract
     This paper examines differences in modeling component parts of bibliographic resources between two conceptual models in cataloging, as a continuation of the previous study that proposed a model giving primacy to the expression-level bibliographic entity. First, the model by the IFLA Study Group on Functional Requirements for Bibliographic Records (FRBR) was examined from the viewpoint of modeling component parts when each part is in itself a resource to be described. The examination is done on two types of component parts, a content part and a document part, which differ in terms of whether they are physically independent. This results in different structures for these two component types. Second, by applying the same viewpoint to the model that the author proposed earlier, it becomes clear that both component types can be modeled in basically the same manner, indicating that the proposed model is more consistent than the FRBR model in this respect.
  7. Taniguchi, S.: A system for analyzing cataloguing rules : a feasibility study (1996) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 4198) [ClassicSimilarity], result of:
          0.024150565 = score(doc=4198,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 4198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4198)
      0.2 = coord(1/5)
    
    Abstract
     The quality control of cataloging standards is as important as the quality control of bibliographic records. In order to aid the quality control of cataloging standards, a prototype system to analyze the ambiguity and complexity of cataloging rules was developed. Before developing the system, a standard rule unit was defined and a simple, function-like format was devised to indicate the syntactic structure of each unit rule. The AACR2 chapter 1 rules were then manually transformed into this function-like, unit-rule format. The system reads the manually transformed unit rules and puts them into their basic forms based on their syntactic components. The system then applies rule-templates, which are skeletal schemata for specific types of cataloging rules, to the converted rules. As a result of this rule-template application, the internal structure of each unit rule is determined. The system is also used to explore inter-rule relationships. That is, the system determines whether two rules have an exclusive, parallel, complementary, or non-relationship. These relationships are based on the analysis of the structural parts described above in terms of the given rule-template. To assist in this process, the system applies external knowledge represented in the same fashion as the rule units themselves. Although the prototype system can handle only a restricted range of rules, the proposed approach is positively validated and shown to be useful. However, it is possibly impractical to build a complete rule-analyzing system of this type at this stage.
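     The abstract only names the building blocks (function-like unit rules, rule-templates, and four inter-rule relationships), so the following Python sketch is a hypothetical illustration of that idea, not the prototype itself: it invents a minimal unit-rule record and a toy relationship classifier.

        from dataclasses import dataclass

        # Hypothetical illustration only: the actual unit-rule format and
        # rule-templates of the prototype are not reproduced here.
        @dataclass(frozen=True)
        class UnitRule:
            conditions: frozenset   # conditions under which the rule applies
            action: str             # the prescribed cataloguing action

        def relationship(a: UnitRule, b: UnitRule) -> str:
            # Toy classifier loosely following the exclusive / parallel /
            # complementary / non-relationship distinction mentioned above.
            if not (a.conditions & b.conditions):
                return "non-relationship"   # the rules never apply to the same case
            if a.conditions == b.conditions:
                return "parallel" if a.action == b.action else "exclusive"
            return "complementary"          # overlapping but unequal scopes

        r1 = UnitRule(frozenset({"monograph"}), "record the title proper")
        r2 = UnitRule(frozenset({"monograph", "no title page"}), "supply a title")
        print(relationship(r1, r2))         # complementary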
  8. Taniguchi, S.: Design of cataloging rules using conceptual modeling of cataloging process (2004) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 2264) [ClassicSimilarity], result of:
          0.024150565 = score(doc=2264,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 2264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2264)
      0.2 = coord(1/5)
    
    Abstract
     This article proposes a method to design cataloging rules by utilizing conceptual modeling of the cataloging process and by applying the concept of "orientedness." It also proposes a general model of the cataloging process at the conceptual level, which is independent of any particular situation/system or cataloging code. The design method is made up of the following phases, including the development of a general model. First, functional and non-functional requirements are specified by use of orientedness, and cataloger tasks, which are constituents of the cataloging process, are defined. Second, a core model is built, which consists of (1) basic event patterns under each task, (2) action patterns applicable to each event, and (3) the orientedness involved in an event-action pair. Third, the core model is propagated to reflect the characteristics of an individual data element and also of a certain class of materials. Finally, the propagated model is defined by choosing pairs of event and action patterns in the model, while referring to the orientedness indicated in each event-action pair, in order to match a particular situation. As a result, a set of event-action pairs reflecting specific requirements through categories of orientedness is obtained, and consistent and scalable design can therefore be attained.
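     As a hypothetical illustration of the event-action pairing described above (the article's own tasks, patterns and categories of orientedness are not reproduced), a core model could be sketched as event patterns mapped to candidate actions tagged with an orientedness label, from which pairs matching a stated requirement are selected:

        # Hypothetical sketch only: the event patterns, action patterns and
        # orientedness labels below are invented for illustration and are not
        # taken from the article's core model.
        core_model = {
            "title appears in more than one language": [
                ("transcribe all parallel titles", "user-oriented"),
                ("transcribe the first title only", "economy-oriented"),
            ],
            "statement of responsibility is missing": [
                ("supply it from an external source", "user-oriented"),
                ("omit the element", "economy-oriented"),
            ],
        }

        def select_pairs(model, required_orientedness):
            # Choose the event-action pairs whose orientedness matches the
            # stated (non-)functional requirements.
            return [
                (event, action)
                for event, actions in model.items()
                for action, orientedness in actions
                if orientedness in required_orientedness
            ]

        print(select_pairs(core_model, {"economy-oriented"}))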
  9. Taniguchi, S.: A system for supporting evidence recording in bibliographic records (2006) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 282) [ClassicSimilarity], result of:
          0.024150565 = score(doc=282,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=282)
      0.2 = coord(1/5)
    
    Abstract
    Recording evidence for data values, in addition to the values themselves, in bibliographic records and descriptive metadata has been proposed in a previous study. Recorded evidence indicates why and how data values are recorded for elements. As a continuation of that study, this article first proposes a scenario in which a cataloger and a system interact with each other in recording evidence in bibliographic records for books, with the aim of minimizing costs and effort in recording evidence. Second, it reports on prototype system development in accordance with the scenario. The system (1) searches a string, corresponding to the data value entered by a cataloger or extracted from the Machine Readable Cataloging (MARC) record, within the scanned and optical character recognition (OCR)-converted title page and verso of the title page of an item being cataloged; (2) identifies the place where the string appears within the source of information; (3) identifies the procedure being used to form the value entered or recorded; and finally (4) displays the place and procedure identified for the data value as its candidate evidence. Third, this study reports on an experiment conducted to examine the system's performance. The results of the experiment show the usefulness of the system and the validity of the proposed scenario.
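     A heavily simplified, hypothetical Python sketch of the matching step described in (1)-(4) above (the real system, its OCR pipeline and its catalogue of formation procedures are not specified here) might look like this:

        # Hypothetical sketch: locate a data value in the OCR text of the sources
        # of information and report a candidate place and formation procedure as
        # its evidence. Not the actual prototype.
        def candidate_evidence(value, sources):
            # sources maps a place name (e.g. "title page") to its OCR text
            for place, ocr_text in sources.items():
                pos = ocr_text.lower().find(value.lower())
                if pos < 0:
                    continue
                exact = ocr_text[pos:pos + len(value)] == value
                procedure = ("transcribed as it appears" if exact
                             else "transcribed with capitalisation changed")
                return {"place": place, "position": pos, "procedure": procedure}
            return None   # no candidate evidence found in the scanned sources

        sources = {
            "title page": "RECORDING EVIDENCE IN BIBLIOGRAPHIC RECORDS / S. Taniguchi",
            "verso of title page": "First published 2006",
        }
        print(candidate_evidence("Recording evidence", sources))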