Search (18 results, page 1 of 1)

  • author_ss:"Greenberg, J."
  1. Greenberg, J.: Understanding metadata and metadata schemes (2005) 0.03
    0.025924496 = product of:
      0.120980985 = sum of:
        0.02018744 = weight(_text_:classification in 5725) [ClassicSimilarity], result of:
          0.02018744 = score(doc=5725,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 5725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=5725)
        0.08060611 = product of:
          0.16121222 = sum of:
            0.16121222 = weight(_text_:schemes in 5725) [ClassicSimilarity], result of:
              0.16121222 = score(doc=5725,freq=16.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                1.0033596 = fieldWeight in 5725, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5725)
          0.5 = coord(1/2)
        0.02018744 = weight(_text_:classification in 5725) [ClassicSimilarity], result of:
          0.02018744 = score(doc=5725,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 5725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=5725)
      0.21428572 = coord(3/14)
    
    Abstract
    Although the development and implementation of metadata schemes over the last decade has been extensive, research examining the sum of these activities is limited. This limitation is likely due to the massive scope of the topic. A framework is needed to study the full extent of, and functionalities supported by, metadata schemes. Metadata schemes developed for information resources are analyzed. To begin, I present a review of the definition of metadata, metadata functions, and several metadata typologies. Next, a conceptualization for metadata schemes is presented. The emphasis is on semantic container-like metadata schemes (data structures). The last part of this paper introduces the MODAL (Metadata Objectives and principles, Domains, and Architectural Layout) framework as an approach for studying metadata schemes. The paper concludes with a brief discussion of the value of frameworks for examining metadata schemes, including different types of metadata schemes.
    Source
    Cataloging and classification quarterly. 40(2005) nos.3/4, S.17-36
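    The relevance figures above are Lucene ClassicSimilarity explain output: each clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf(freq) x idf x fieldNorm, and the outer coord factor scales the sum by the fraction of query clauses that matched. The sketch below re-derives the score of result 1 from the constants in the tree; the constants are copied verbatim, while the helper names are illustrative and are not the engine's code.

```python
# A minimal sketch (not the search engine's code) re-deriving the explain tree
# for result 1 under Lucene ClassicSimilarity; constants are copied from the
# tree above, helper names are illustrative.
import math

QUERY_NORM = 0.03002521   # queryNorm
FIELD_NORM = 0.046875     # fieldNorm(doc=5725)

def clause_weight(freq: float, idf: float) -> float:
    """weight = queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM
    return query_weight * field_weight

# Two identical "classification" clauses (freq=2) and one "schemes" clause
# (freq=16) that is itself scaled by an inner coord(1/2).
classification = clause_weight(freq=2.0, idf=3.1847067)   # ~0.02018744
schemes = clause_weight(freq=16.0, idf=5.3512506) * 0.5   # ~0.08060611

# Only 3 of the 14 query clauses matched, hence the outer coord(3/14).
score = (classification + schemes + classification) * (3 / 14)
print(f"{score:.9f}")   # ~0.025924496, shown as 0.03 in the result header
```

    The same arithmetic can be read off the explain trees of the remaining results.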
  2. Greenberg, J.: Intellectual control of visual archives : a comparison between the Art and Architecture Thesaurus and the Library of Congress Thesaurus for Graphic Materials (1993) 0.02
    0.01810185 = product of:
      0.0844753 = sum of:
        0.044100422 = weight(_text_:subject in 546) [ClassicSimilarity], result of:
          0.044100422 = score(doc=546,freq=6.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.41066417 = fieldWeight in 546, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=546)
        0.02018744 = weight(_text_:classification in 546) [ClassicSimilarity], result of:
          0.02018744 = score(doc=546,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=546)
        0.02018744 = weight(_text_:classification in 546) [ClassicSimilarity], result of:
          0.02018744 = score(doc=546,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=546)
      0.21428572 = coord(3/14)
    
    Abstract
    The following investigation is a comparison between the Art and Architecture Thesaurus (AAT) and the LC Thesaurus for Graphic Materials (LCTGM), two popular sources for providing subject access to visual archives. The analysis begins with a discussion of the nature of visual archives and the application of archival control theory to graphic materials. The major difference observed is that the AAT is a faceted structure geared towards a specialized audience of art and architecture researchers, while LCTGM is similar to LCSH in structure and aims to serve the widespread archival community. The conclusion recognizes the need to understand the differences between subject thesauri and subject heading lists, and the pressing need to investigate and understand intellectual control of visual archives in today's automated environment.
    Source
    Cataloging and classification quarterly. 16(1993) no.1, S.85-117
  3. Greenberg, J.; Zhao, X.; Monselise, M.; Grabus, S.; Boone, J.: Knowledge organization systems : a network for AI with helping interdisciplinary vocabulary engineering (2021) 0.02
    0.016459066 = product of:
      0.076808974 = sum of:
        0.029704956 = weight(_text_:subject in 719) [ClassicSimilarity], result of:
          0.029704956 = score(doc=719,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27661324 = fieldWeight in 719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=719)
        0.023552012 = weight(_text_:classification in 719) [ClassicSimilarity], result of:
          0.023552012 = score(doc=719,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=719)
        0.023552012 = weight(_text_:classification in 719) [ClassicSimilarity], result of:
          0.023552012 = score(doc=719,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=719)
      0.21428572 = coord(3/14)
    
    Footnote
    Part of a special issue: Artificial intelligence (AI) and automated processes for subject access
    Source
    Cataloging and classification quarterly. 59(2021) no.8, S.720-739
  4. Greenberg, J.: Subject control of ephemera : MARC format options (1996) 0.01
    0.013514834 = product of:
      0.09460384 = sum of:
        0.059409913 = weight(_text_:subject in 543) [ClassicSimilarity], result of:
          0.059409913 = score(doc=543,freq=8.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.5532265 = fieldWeight in 543, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=543)
        0.035193928 = weight(_text_:bibliographic in 543) [ClassicSimilarity], result of:
          0.035193928 = score(doc=543,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.30108726 = fieldWeight in 543, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=543)
      0.14285715 = coord(2/14)
    
    Abstract
    Provides an overview of the MARC format and the structure of the bibliographic MARC record. Discusses the MARC-AMC and MARC-VM formats as options for controlling ephemera, lists popular controlled vocabulary tools for subject control over ephemera material, and examines subject analysis methodologies. Considers the specific MARC field options for the subject control of ephemera and provides 3 worked examples. Concludes that, while it can be argued that the MARC format does not provide an ideal control system for ephemera, it does offer an excellent means of controlling ephemera in the online environment and permits ephemera to be intellectually linked with related materials of all formats.
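    As an illustration of the kind of MARC field options the abstract refers to, the sketch below encodes two hypothetical subject/genre fields for an ephemera item, a 650 topical heading and a 655 genre/form heading; the field content is invented and does not reproduce the article's three worked examples.

```python
# Hypothetical MARC 21 subject fields for an ephemera item: tag 650 (topical
# term, second indicator 0 = LCSH) and tag 655 (genre/form term, second
# indicator 7 with a $2 source code). Content is invented for illustration.
record_subject_fields = [
    # (tag, indicator1, indicator2, subfields as (code, value) pairs)
    ("650", " ", "0", [("a", "Political campaigns"), ("z", "United States")]),
    ("655", " ", "7", [("a", "Campaign buttons"), ("2", "aat")]),
]

for tag, ind1, ind2, subfields in record_subject_fields:
    body = "".join(f"${code}{value}" for code, value in subfields)
    print(f"{tag} {ind1}{ind2} {body}")
    # e.g. "650  0 $aPolitical campaigns$zUnited States"
```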
  5. Greenberg, J.; Mayer-Patel, K.; Trujillo, S.: YouTube: applying FRBR and exploring the multiple description coding compression model (2012) 0.01
    0.012596626 = product of:
      0.058784254 = sum of:
        0.016822865 = weight(_text_:classification in 1930) [ClassicSimilarity], result of:
          0.016822865 = score(doc=1930,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 1930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1930)
        0.02513852 = weight(_text_:bibliographic in 1930) [ClassicSimilarity], result of:
          0.02513852 = score(doc=1930,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 1930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1930)
        0.016822865 = weight(_text_:classification in 1930) [ClassicSimilarity], result of:
          0.016822865 = score(doc=1930,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 1930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1930)
      0.21428572 = coord(3/14)
    
    Abstract
    Nearly everyone who has searched YouTube for a favorite show, movie, newscast, or other known item has retrieved multiple video clips (or segments) that appear to duplicate, overlap, and relate. The work presented in this paper considers this challenge and reports on a study examining the applicability of the Functional Requirements for Bibliographic Records (FRBR) for relating varying renderings of YouTube videos. The paper also introduces the Multiple Description Coding Compression (MDC2) model to extend FRBR and address YouTube preservation/storage challenges. The study sample included 20 video segments from YouTube; 10 connected with the event, Small Step for Man (US Astronaut Neil Armstrong's first step on the moon), and 10 with the 1966 classic movie, "Batman: The Movie." The FRBR analysis used a qualitative content analysis, and the MDC2 exploration was pursued via a high-level approach of protocol modeling. Results indicate that FRBR is applicable to YouTube, although the analyses required a localization of the Work, Expression, Manifestation, and Item (WEMI) FRBR elements. The MDC2 exploration illustrates an approach for exploring FRBR in the context of other models, and identifies a potential means for addressing YouTube-related preservation/storage challenges.
    Source
    Cataloging and classification quarterly. 50(2012) no.5/7, S.742-762
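    The WEMI localization mentioned in the abstract above can be pictured as a simple entity chain; the sketch below is a hypothetical illustration of that chain for a YouTube clip and does not reproduce the authors' localization.

```python
# A hypothetical sketch of the FRBR Work/Expression/Manifestation/Item (WEMI)
# chain applied to a YouTube clip; class and attribute names are illustrative.
from dataclasses import dataclass

@dataclass
class Work:              # the abstract creation, e.g. "Batman: The Movie"
    title: str

@dataclass
class Expression:        # a realization of the work, e.g. the 1966 theatrical cut
    work: Work
    version: str

@dataclass
class Manifestation:     # a particular encoding uploaded to YouTube
    expression: Expression
    codec: str
    resolution: str

@dataclass
class Item:              # one retrievable clip or segment
    manifestation: Manifestation
    url: str
    segment: str         # e.g. "00:12:30-00:14:10"

clip = Item(
    manifestation=Manifestation(
        expression=Expression(work=Work(title="Batman: The Movie"),
                              version="1966 theatrical cut"),
        codec="H.264",
        resolution="480p",
    ),
    url="https://www.youtube.com/watch?v=EXAMPLE",   # placeholder URL
    segment="00:12:30-00:14:10",
)
print(clip.manifestation.expression.work.title)   # -> Batman: The Movie
```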
  6. Greenberg, J.; Méndez Rodríguez, E.M.: Introduction: toward a more library-like Web via semantic knitting (2006) 0.01
    0.01153568 = product of:
      0.08074976 = sum of:
        0.04037488 = weight(_text_:classification in 224) [ClassicSimilarity], result of:
          0.04037488 = score(doc=224,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.42223644 = fieldWeight in 224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.09375 = fieldNorm(doc=224)
        0.04037488 = weight(_text_:classification in 224) [ClassicSimilarity], result of:
          0.04037488 = score(doc=224,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.42223644 = fieldWeight in 224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.09375 = fieldNorm(doc=224)
      0.14285715 = coord(2/14)
    
    Source
    Cataloging and classification quarterly. 43(2006) nos.3/4, S.1-8
  7. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.01
    0.008343249 = product of:
      0.05840274 = sum of:
        0.050266735 = product of:
          0.10053347 = sum of:
            0.10053347 = weight(_text_:schemes in 367) [ClassicSimilarity], result of:
              0.10053347 = score(doc=367,freq=14.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.6257046 = fieldWeight in 367, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.5 = coord(1/2)
        0.008136002 = product of:
          0.016272005 = sum of:
            0.016272005 = weight(_text_:22 in 367) [ClassicSimilarity], result of:
              0.016272005 = score(doc=367,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.15476047 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
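    As a sketch of the statistical step described above, the snippet below runs a Fisher's exact test on a single hypothetical 2x2 table relating one scheme domain to one scheme objective; the cell counts are invented and do not come from the paper.

```python
# Fisher's exact test on an invented 2x2 contingency table: whether schemes
# describing observational data are more likely to state "scheme harmonization"
# as an objective (counts are hypothetical, summing to nine schemes).
from scipy.stats import fisher_exact

#                  has "scheme harmonization"   lacks it
table = [
    [4, 1],        # schemes describing observational data
    [1, 3],        # other schemes
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")   # significant if p < .05
```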
  8. Greenberg, J.: User comprehension and application of information retrieval thesauri (2004) 0.01
    0.00576784 = product of:
      0.04037488 = sum of:
        0.02018744 = weight(_text_:classification in 5008) [ClassicSimilarity], result of:
          0.02018744 = score(doc=5008,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 5008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=5008)
        0.02018744 = weight(_text_:classification in 5008) [ClassicSimilarity], result of:
          0.02018744 = score(doc=5008,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 5008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=5008)
      0.14285715 = coord(2/14)
    
    Source
    Cataloging and classification quarterly. 37(2004) nos.3/4, S.xx-xx
  9. Greenberg, J.: Advancing Semantic Web via library functions (2006) 0.01
    0.00576784 = product of:
      0.04037488 = sum of:
        0.02018744 = weight(_text_:classification in 244) [ClassicSimilarity], result of:
          0.02018744 = score(doc=244,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=244)
        0.02018744 = weight(_text_:classification in 244) [ClassicSimilarity], result of:
          0.02018744 = score(doc=244,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=244)
      0.14285715 = coord(2/14)
    
    Source
    Cataloging and classification quarterly. 43(2006) nos.3/4, S.203-225
  10. Greenberg, J.: Theoretical considerations of lifecycle modeling : an analysis of the Dryad Repository demonstrating automatic metadata propagation, inheritance, and value system adoption (2009) 0.00
    0.004806533 = product of:
      0.03364573 = sum of:
        0.016822865 = weight(_text_:classification in 2990) [ClassicSimilarity], result of:
          0.016822865 = score(doc=2990,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 2990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2990)
        0.016822865 = weight(_text_:classification in 2990) [ClassicSimilarity], result of:
          0.016822865 = score(doc=2990,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 2990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2990)
      0.14285715 = coord(2/14)
    
    Source
    Cataloging and classification quarterly. 47(2009) nos.3/4, S.xx-xx
  11. Greenberg, J.: ¬A quantitative categorical analysis of metadata elements in image-applicable metadata schemes (2001) 0.00
    0.00237488 = product of:
      0.03324832 = sum of:
        0.03324832 = product of:
          0.06649664 = sum of:
            0.06649664 = weight(_text_:schemes in 6529) [ClassicSimilarity], result of:
              0.06649664 = score(doc=6529,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.41386467 = fieldWeight in 6529, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6529)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
  12. Grabus, S.; Logan, P.M.; Greenberg, J.: Temporal concept drift and alignment : an empirical approach to comparing knowledge organization systems over time (2022) 0.00
    0.0021433241 = product of:
      0.030006537 = sum of:
        0.030006537 = weight(_text_:subject in 1100) [ClassicSimilarity], result of:
          0.030006537 = score(doc=1100,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27942157 = fieldWeight in 1100, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1100)
      0.071428575 = coord(1/14)
    
    Abstract
    This research explores temporal concept drift and temporal alignment in knowledge organization systems (KOS). A comparative analysis is pursued using the 1910 Library of Congress Subject Headings, 2020 FAST Topical, and automatic indexing. The use case involves a sample of 90 nineteenth-century Encyclopedia Britannica entries. The entries were indexed using two approaches: 1) full-text indexing; 2) Named Entity Recognition was performed upon the entries with Stanza, Stanford's NLP toolkit, and entities were automatically indexed with the Helping Interdisciplinary Vocabulary Engineering (HIVE) application, using both 1910 LCSH and FAST Topical. The analysis focused on three goals: 1) identifying results that were exclusive to the 1910 LCSH output; 2) identifying terms in the exclusive set that have been deprecated from the contemporary LCSH, demonstrating temporal concept drift; and 3) exploring the historical significance of these deprecated terms. Results confirm that historical vocabularies can be used to generate anachronistic subject headings representing conceptual drift across time in KOS and historical resources. A methodological contribution is made by demonstrating how to study changes in KOS over time and improve the contextualization of historical humanities resources.
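    The second indexing approach described above (named-entity recognition followed by controlled-vocabulary lookup) can be roughly sketched as follows; the tiny vocabulary set stands in for the HIVE application, which is not called here, and the sample sentence is invented.

```python
# A rough sketch of NER with Stanza followed by matching recognized entities
# against a controlled vocabulary. The vocabulary_1910_lcsh set is a tiny
# hypothetical stand-in for a HIVE lookup against the 1910 LCSH.
import stanza

stanza.download("en")                                    # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,ner")

entry_text = "The article on Abyssinia discusses its geography and rulers."
doc = nlp(entry_text)

vocabulary_1910_lcsh = {"Abyssinia", "Geography"}        # hypothetical term list

recognized = {ent.text for ent in doc.ents}
matched_headings = recognized & vocabulary_1910_lcsh
print(matched_headings)                                  # e.g. {'Abyssinia'}
```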
  13. Greenberg, J.: Reference structures : stagnation, progress, and future challenges (1997) 0.00
    0.0021217826 = product of:
      0.029704956 = sum of:
        0.029704956 = weight(_text_:subject in 1103) [ClassicSimilarity], result of:
          0.029704956 = score(doc=1103,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27661324 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1103)
      0.071428575 = coord(1/14)
    
    Abstract
    Assesses the current state of reference structures in OPACs in a framework defined by stagnation, progress, and future challenges. 'Stagnation' refers to the limited and inconsistent reference structure access provided in current OPACs. 'Progress' refers to improved OPAC reference structure access and reference structure possibilities that extend beyond those commonly represented in existing subject authority control tools. The progress discussion is supported by a look at professional committee work, data modelling ideas, ontological theory, and one area of linguistic research. The discussion ends with a list of 6 areas needing attention if reference structure access is to be improved in the future OPAC environment.
  14. Li, K.; Greenberg, J.; Dunic, J.: Data objects and documenting scientific processes : an analysis of data events in biodiversity data papers (2020) 0.00
    0.001780432 = product of:
      0.024926046 = sum of:
        0.024926046 = product of:
          0.04985209 = sum of:
            0.04985209 = weight(_text_:texts in 5615) [ClassicSimilarity], result of:
              0.04985209 = score(doc=5615,freq=2.0), product of:
                0.16460659 = queryWeight, product of:
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.03002521 = queryNorm
                0.302856 = fieldWeight in 5615, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5615)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Abstract
    The data paper, an emerging scholarly genre, describes research data sets and is intended to bridge the gap between the publication of research data and scientific articles. Research examining how data papers report data events, such as data transactions and manipulations, is limited. The research reported on in this article addresses this limitation and investigated how data events are inscribed in data papers. A content analysis was conducted examining the full texts of 82 data papers, drawn from the curated list of data papers connected to the Global Biodiversity Information Facility. Data events recorded for each paper were organized into a set of 17 categories. Many of these categories are described together in the same sentence, which indicates the messiness of data events in the laboratory space. The findings challenge the degree to which data papers are a distinct genre compared to research articles and the degree to which they describe data-centric research processes in a thorough way. This article also discusses how our results could inform a better data publication ecosystem in the future.
  15. Greenberg, J.: Metadata and the World Wide Web (2002) 0.00
    0.0015155592 = product of:
      0.021217827 = sum of:
        0.021217827 = weight(_text_:subject in 4264) [ClassicSimilarity], result of:
          0.021217827 = score(doc=4264,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.19758089 = fieldWeight in 4264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
      0.071428575 = coord(1/14)
    
    Abstract
    Metadata is of paramount importance for persons, organizations, and endeavors of every dimension that are increasingly turning to the World Wide Web (hereafter referred to as the Web) as a chief conduit for accessing and disseminating information. This is evidenced by the development and implementation of metadata schemas supporting projects ranging from restricted corporate intranets, data warehouses, and consumer-oriented electronic commerce enterprises to freely accessible digital libraries, educational initiatives, virtual museums, and other public Web sites. Today's metadata activities are unprecedented because they extend beyond the traditional library environment in an effort to deal with the Web's exponential growth. This article considers metadata in today's Web environment. The article defines metadata, examines the relationship between metadata and cataloging, provides definitions for key metadata vocabulary terms, and explores the topic of metadata generation. Metadata is an extensive and expanding subject that is prevalent in many environments. For practical reasons, this article has elected to concentrate on the information resource domain, which is defined by electronic textual documents, graphical images, archival materials, museum artifacts, and other objects found in both digital and physical information centers (e.g., libraries, museums, record centers, and archives). To show the extent and larger application of metadata, several examples are also drawn from the data warehouse, electronic commerce, open source, and medical communities.
  16. White, H.C.; Carrier, S.; Thompson, A.; Greenberg, J.; Scherle, R.: ¬The Dryad Data Repository : a Singapore framework metadata architecture in a DSpace environment (2008) 0.00
    0.0010170004 = product of:
      0.014238005 = sum of:
        0.014238005 = product of:
          0.02847601 = sum of:
            0.02847601 = weight(_text_:22 in 2592) [ClassicSimilarity], result of:
              0.02847601 = score(doc=2592,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.2708308 = fieldWeight in 2592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2592)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  17. White, H.; Willis, C.; Greenberg, J.: HIVEing : the effect of a semantic web technology on inter-indexer consistency (2014) 0.00
    7.264289E-4 = product of:
      0.010170003 = sum of:
        0.010170003 = product of:
          0.020340007 = sum of:
            0.020340007 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.020340007 = score(doc=1781,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.19345059 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Abstract
    Purpose - The purpose of this paper is to examine the effect of the Helping Interdisciplinary Vocabulary Engineering (HIVE) system on the inter-indexer consistency of information professionals when assigning keywords to a scientific abstract. This study examined first, the inter-indexer consistency of potential HIVE users; second, the impact HIVE had on consistency; and third, challenges associated with using HIVE. Design/methodology/approach - A within-subjects quasi-experimental research design was used for this study. Data were collected using a task-scenario based questionnaire. Analysis was performed on consistency results using Hooper's and Rolling's inter-indexer consistency measures. A series of t-tests was used to judge the significance of differences between consistency measure results. Findings - Results suggest that HIVE improves inter-indexing consistency. Working with HIVE increased consistency rates by 22 percent (Rolling's) and 25 percent (Hooper's) when selecting relevant terms from all vocabularies. A statistically significant difference exists between the assignment of free-text keywords and machine-aided keywords. Issues with homographs, disambiguation, vocabulary choice, and document structure were all identified as potential challenges. Research limitations/implications - Research limitations for this study can be found in the small number of vocabularies used. Future research will include implementing HIVE into the Dryad Repository and studying its application in a repository system. Originality/value - This paper showcases several features of the HIVE system. By using traditional consistency measures to evaluate a semantic web technology, this paper emphasizes the link between traditional indexing and next generation machine-aided indexing (MAI) tools.
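    The two consistency measures named above are commonly defined as Hooper's C / (A + B - C) and Rolling's 2C / (A + B), where A and B are the numbers of terms assigned by two indexers and C is the number of terms they share. The sketch below applies these standard definitions to invented keyword sets; it is not the study's analysis code.

```python
# Hooper's and Rolling's inter-indexer consistency for two keyword sets
# (the sets themselves are invented for illustration).
def hoopers(a: set, b: set) -> float:
    common = len(a & b)
    return common / (len(a) + len(b) - common)

def rollings(a: set, b: set) -> float:
    common = len(a & b)
    return 2 * common / (len(a) + len(b))

indexer_a = {"metadata", "vocabularies", "indexing", "semantic web"}
indexer_b = {"metadata", "indexing", "thesauri"}

print(f"Hooper's:  {hoopers(indexer_a, indexer_b):.2f}")    # 2/(4+3-2) = 0.40
print(f"Rolling's: {rollings(indexer_a, indexer_b):.2f}")   # 2*2/(4+3) = 0.57
```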
  18. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.00
    5.811431E-4 = product of:
      0.008136002 = sum of:
        0.008136002 = product of:
          0.016272005 = sum of:
            0.016272005 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
              0.016272005 = score(doc=2661,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.15476047 = fieldWeight in 2661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas