Search (10 results, page 1 of 1)

  • author_ss:"Greenberg, J."
  1. Greenberg, J.: Reference structures : stagnation, progress, and future challenges (1997) 0.04
    0.03637277 = product of:
      0.14549108 = sum of:
        0.10511641 = weight(_text_:supported in 1103) [ClassicSimilarity], result of:
          0.10511641 = score(doc=1103,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.45803228 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1103)
        0.04037467 = weight(_text_:work in 1103) [ClassicSimilarity], result of:
          0.04037467 = score(doc=1103,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1103)
      0.25 = coord(2/8)
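
    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each matching term scores queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq); the per-term scores are summed and multiplied by the coordination factor coord(matching clauses / total clauses). A minimal Python sketch reproducing entry 1's numbers (values copied from the tree; the function and variable names are illustrative):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      query_norm = 0.03875087
      supported = term_score(2.0, 5.9223356, query_norm, 0.0546875)  # ~0.10511641
      work = term_score(2.0, 3.6703904, query_norm, 0.0546875)       # ~0.04037467
      coord = 2 / 8                                                  # 2 of 8 query clauses matched
      print(round((supported + work) * coord, 8))                    # ~0.03637277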
    
    Abstract
    Assesses the current state of reference structures in OPACs in a framework defined by stagnation, progress, and future challenges. 'Stagnation' refers to the limited and inconsistent reference structure access provided in current OPACs. 'Progress' refers to improved OPAC reference structure access and reference structure possibilities that extend beyond those commonly represented in existing subject authority control tools. The progress discussion is supported by a look at professional committee work, data modelling ideas, ontological theory, and one area of linguistic research. The discussion ends with a list of 6 areas needing attention if reference structure access is to be improved in the future OPAC environment.
  2. Greenberg, J.; Murillo, A.; Ogletree, A.; Boyles, R.; Martin, N.; Romeo, C.: Metadata capital : automating metadata workflows in the NIEHS viral vector core laboratory (2014) 0.03
    0.03117666 = product of:
      0.12470664 = sum of:
        0.09009978 = weight(_text_:supported in 1566) [ClassicSimilarity], result of:
          0.09009978 = score(doc=1566,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.3925991 = fieldWeight in 1566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.046875 = fieldNorm(doc=1566)
        0.034606863 = weight(_text_:work in 1566) [ClassicSimilarity], result of:
          0.034606863 = score(doc=1566,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 1566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=1566)
      0.25 = coord(2/8)
    
    Abstract
    This paper presents research examining metadata capital in the context of the Viral Vector Core Laboratory at the National Institute of Environmental Health Sciences (NIEHS). Methods include collaborative workflow modeling and a metadata analysis. Models of the laboratory's workflow and metadata activity are generated to identify potential opportunities for defining microservices that may be supported by iRODS rules. Generic iRODS rules are also shared along with images of the iRODS prototype. The discussion includes an exploration of a modified capital sigma equation to understand metadata as an asset. The work aims to raise awareness of metadata as an asset and to incentivize investment in metadata R&D.
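    The iRODS rules themselves are not reproduced in the abstract, so the sketch below is only a loose, hypothetical analogue of the kind of post-ingest check such rule-driven microservices automate; the field names and the function are invented and are not the NIEHS workflow or iRODS rule syntax:

      # Hypothetical post-ingest metadata completeness check (illustrative only;
      # not the laboratory's iRODS rules and not written in the iRODS rule language).
      REQUIRED_FIELDS = {"title", "creator", "date_created", "vector_id"}  # invented field names

      def missing_metadata(record: dict) -> set:
          """Return required fields that are absent or empty in a metadata record."""
          return {f for f in REQUIRED_FIELDS if not record.get(f)}

      record = {"title": "Vector batch 42", "creator": "Core Lab", "date_created": ""}
      print(missing_metadata(record))  # e.g. {'vector_id', 'date_created'}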
  3. Greenberg, J.: Optimal query expansion (QE) processing methods with semantically encoded structured thesaurus terminology (2001) 0.01
    0.011262473 = product of:
      0.09009978 = sum of:
        0.09009978 = weight(_text_:supported in 5750) [ClassicSimilarity], result of:
          0.09009978 = score(doc=5750,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.3925991 = fieldWeight in 5750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.046875 = fieldNorm(doc=5750)
      0.125 = coord(1/8)
    
    Abstract
    While researchers have explored the value of structured thesauri as controlled vocabularies for general information retrieval (IR) activities, they have not identified the optimal query expansion (QE) processing methods for taking advantage of the semantic encoding underlying the terminology in these tools. The study reported in this article addresses this question and examines whether QE via semantically encoded thesaurus terminology is more effective in the automatic or the interactive processing environment. The research found that, regardless of end-users' retrieval goals, synonyms and partial synonyms (SYNs) and narrower terms (NTs) are generally good candidates for automatic QE, and that related terms (RTs) are better candidates for interactive QE. The study also examined end-users' selection of semantically encoded thesaurus terms for interactive QE, and explored how retrieval goals and QE processes may be combined in future thesaurus-supported IR systems.
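    The reported finding suggests a simple expansion policy: add synonyms/partial synonyms and narrower terms automatically, and only surface related terms for the user to confirm. A hedged Python sketch of that policy (the toy thesaurus entry and function names are illustrative, not the study's system):

      # Expansion policy based on the reported findings: SYNs and NTs are added
      # automatically, RTs are returned as interactive suggestions only.
      thesaurus = {  # toy entry; a real system would load a full structured thesaurus
          "automobile": {"SYN": ["car", "motorcar"], "NT": ["sedan", "coupe"], "RT": ["traffic"]},
      }

      def expand(term):
          entry = thesaurus.get(term, {})
          automatic = entry.get("SYN", []) + entry.get("NT", [])
          interactive = entry.get("RT", [])  # shown to the end-user for confirmation
          return [term] + automatic, interactive

      auto_terms, suggestions = expand("automobile")
      print(auto_terms)   # ['automobile', 'car', 'motorcar', 'sedan', 'coupe']
      print(suggestions)  # ['traffic']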
  4. Greenberg, J.: Understanding metadata and metadata schemes (2005) 0.01
    0.011262473 = product of:
      0.09009978 = sum of:
        0.09009978 = weight(_text_:supported in 5725) [ClassicSimilarity], result of:
          0.09009978 = score(doc=5725,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.3925991 = fieldWeight in 5725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.046875 = fieldNorm(doc=5725)
      0.125 = coord(1/8)
    
    Abstract
    Although the development and implementation of metadata schemes over the last decade have been extensive, research examining the sum of these activities is limited. This limitation is likely due to the massive scope of the topic. A framework is needed to study the full extent of, and functionalities supported by, metadata schemes. Metadata schemes developed for information resources are analyzed. To begin, I present a review of the definition of metadata, metadata functions, and several metadata typologies. Next, a conceptualization for metadata schemes is presented. The emphasis is on semantic container-like metadata schemes (data structures). The last part of this paper introduces the MODAL (Metadata Objectives and principles, Domains, and Architectural Layout) framework as an approach for studying metadata schemes. The paper concludes with a brief discussion of the value of frameworks for examining metadata schemes, including different types of metadata schemes.
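    As a reading aid only, a scheme profiled along the MODAL dimensions can be pictured as a small record; a hypothetical Python sketch (the example values are invented placeholders, not taken from the paper):

      from dataclasses import dataclass, field

      @dataclass
      class MetadataScheme:
          """Toy profile of a scheme along the MODAL dimensions (illustrative only)."""
          name: str
          objectives_and_principles: list = field(default_factory=list)
          domains: list = field(default_factory=list)
          architectural_layout: str = ""

      dc = MetadataScheme(
          name="Dublin Core",
          objectives_and_principles=["simplicity", "interoperability"],  # placeholder values
          domains=["general web resources"],                             # placeholder value
          architectural_layout="flat element set",                       # placeholder value
      )
      print(dc.name, dc.domains)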
  5. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.01
    0.008392914 = product of:
      0.033571657 = sum of:
        0.02307124 = weight(_text_:work in 2661) [ClassicSimilarity], result of:
          0.02307124 = score(doc=2661,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 2661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.010500416 = product of:
          0.021000832 = sum of:
            0.021000832 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
              0.021000832 = score(doc=2661,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.15476047 = fieldWeight in 2661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits: the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data are drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  6. Greenberg, J.; Mayer-Patel, K.; Trujillo, S.: YouTube: applying FRBR and exploring the multiple description coding compression model (2012) 0.01
    0.005098072 = product of:
      0.040784575 = sum of:
        0.040784575 = weight(_text_:work in 1930) [ClassicSimilarity], result of:
          0.040784575 = score(doc=1930,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28674924 = fieldWeight in 1930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1930)
      0.125 = coord(1/8)
    
    Abstract
    Nearly everyone who has searched YouTube for a favorite show, movie, newscast, or other known item has retrieved multiple video clips (or segments) that appear to duplicate, overlap, and relate. The work presented in this paper considers this challenge and reports on a study examining the applicability of the Functional Requirements for Bibliographic Records (FRBR) for relating varying renderings of YouTube videos. The paper also introduces the Multiple Description Coding Compression (MDC2) model to extend FRBR and address YouTube preservation/storage challenges. The study sample included 20 video segments from YouTube; 10 connected with the event, Small Step for Man (US Astronaut Neil Armstrong's first step on the moon), and 10 with the 1966 classic movie, "Batman: The Movie." The FRBR analysis used a qualitative content analysis, and the MDC2 exploration was pursued via a high-level protocol-modeling approach. Results indicate that FRBR is applicable to YouTube, although the analyses required a localization of the Work, Expression, Manifestation, and Item (WEMI) FRBR elements. The MDC2 exploration illustrates an approach for exploring FRBR in the context of other models, and identifies a potential means for addressing YouTube-related preservation/storage challenges.
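    For readers unfamiliar with WEMI, the localization described above amounts to grouping individual clips (Items) under shared Manifestations, Expressions, and a Work; a hypothetical Python sketch of such a grouping (the entities and labels are illustrative, not the study's coding):

      from dataclasses import dataclass

      @dataclass
      class Item:                 # an individual YouTube clip/segment
          url: str

      @dataclass
      class Manifestation:        # a particular upload/encoding carrying one or more items
          items: list

      @dataclass
      class Expression:           # a specific rendering of the work (e.g. one broadcast cut)
          manifestations: list

      @dataclass
      class Work:                 # the abstract content, e.g. "Small Step for Man"
          title: str
          expressions: list

      clip = Item(url="https://www.youtube.com/watch?v=...")  # placeholder URL
      work = Work("Small Step for Man", [Expression([Manifestation([clip])])])
      print(work.title, len(work.expressions[0].manifestations[0].items))  # Small Step for Man 1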
  7. Crystal, A.; Greenberg, J.: Relevance criteria identified by health information users during Web searches (2006) 0.00
    0.0036048815 = product of:
      0.028839052 = sum of:
        0.028839052 = weight(_text_:work in 5909) [ClassicSimilarity], result of:
          0.028839052 = score(doc=5909,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 5909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5909)
      0.125 = coord(1/8)
    
    Abstract
    This article focuses on the relevance judgments made by health information users who use the Web. Health information users were conceptualized as motivated information users concerned about how an environmental issue affects their health. Users identified their own environmental health interests and conducted a Web search of a particular environmental health Web site. Users were asked to identify (by highlighting with a mouse) the criteria they use to assess relevance in both Web search engine surrogates and full-text Web documents. Content analysis of document criteria highlighted by users identified the criteria these users relied on most often. Key criteria identified included (in order of frequency of appearance) research, topic, scope, data, influence, affiliation, Web characteristics, and authority/person. A power-law distribution of criteria was observed (a few criteria represented most of the highlighted regions, with a long tail of occasionally used criteria). Implications of this work are that information retrieval (IR) systems should be tailored in terms of users' tendencies to rely on certain document criteria, and that relevance research should combine methods to gather richer, contextualized data. Metadata for IR systems, such as that used in search engine surrogates, could be improved by taking into account actual usage of relevance criteria. Such metadata should be user-centered (based on data from users, as in this study) and context-appropriate (fit to users' situations and tasks).
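    The frequency ranking and long tail described above are, at bottom, a count over the coded highlight labels; a minimal Python sketch with made-up labels (the real study coded actual user highlights):

      from collections import Counter

      # Made-up highlight labels standing in for the coded user highlights.
      highlights = ["research", "topic", "research", "scope", "topic",
                    "research", "data", "affiliation", "research", "topic"]
      ranked = Counter(highlights).most_common()
      print(ranked)  # [('research', 4), ('topic', 3), ('scope', 1), ('data', 1), ('affiliation', 1)]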
  8. White, H.C.; Carrier, S.; Thompson, A.; Greenberg, J.; Scherle, R.: The Dryad Data Repository : a Singapore framework metadata architecture in a DSpace environment (2008) 0.00
    0.002296966 = product of:
      0.018375728 = sum of:
        0.018375728 = product of:
          0.036751457 = sum of:
            0.036751457 = weight(_text_:22 in 2592) [ClassicSimilarity], result of:
              0.036751457 = score(doc=2592,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.2708308 = fieldWeight in 2592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2592)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. White, H.; Willis, C.; Greenberg, J.: HIVEing : the effect of a semantic web technology on inter-indexer consistency (2014) 0.00
    0.00164069 = product of:
      0.01312552 = sum of:
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.02625104 = score(doc=1781,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The purpose of this paper is to examine the effect of the Helping Interdisciplinary Vocabulary Engineering (HIVE) system on the inter-indexer consistency of information professionals when assigning keywords to a scientific abstract. This study examined, first, the inter-indexer consistency of potential HIVE users; second, the impact HIVE had on consistency; and third, the challenges associated with using HIVE. Design/methodology/approach - A within-subjects quasi-experimental research design was used for this study. Data were collected using a task-scenario-based questionnaire. Analysis was performed on consistency results using Hooper's and Rolling's inter-indexer consistency measures. A series of t-tests was used to judge the significance of differences between consistency measure results. Findings - Results suggest that HIVE improves inter-indexer consistency. Working with HIVE increased consistency rates by 22 percent (Rolling's) and 25 percent (Hooper's) when selecting relevant terms from all vocabularies. A statistically significant difference exists between the assignment of free-text keywords and machine-aided keywords. Issues with homographs, disambiguation, vocabulary choice, and document structure were all identified as potential challenges. Research limitations/implications - Research limitations include the small number of vocabularies used in the study. Future research will include implementing HIVE into the Dryad Repository and studying its application in a repository system. Originality/value - This paper showcases several features of the HIVE system. By using traditional consistency measures to evaluate a semantic web technology, this paper emphasizes the link between traditional indexing and next-generation machine-aided indexing (MAI) tools.
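    Hooper's and Rolling's measures are both overlap ratios over the keyword sets assigned by two indexers: with A terms from one indexer, B from the other, and C terms in common, Hooper's = C / (A + B - C) and Rolling's = 2C / (A + B). A small Python sketch (the example keyword sets are invented):

      def hooper(a, b):
          c = len(a & b)
          return c / (len(a) + len(b) - c)

      def rolling(a, b):
          c = len(a & b)
          return 2 * c / (len(a) + len(b))

      # Invented keyword sets for two indexers describing the same abstract.
      indexer1 = {"metadata", "vocabulary", "indexing", "semantic web"}
      indexer2 = {"metadata", "indexing", "thesauri"}
      print(round(hooper(indexer1, indexer2), 3))   # 0.4
      print(round(rolling(indexer1, indexer2), 3))  # 0.571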
  10. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.00
    0.001312552 = product of:
      0.010500416 = sum of:
        0.010500416 = product of:
          0.021000832 = sum of:
            0.021000832 = weight(_text_:22 in 367) [ClassicSimilarity], result of:
              0.021000832 = score(doc=367,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.15476047 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's () metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
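    The significance testing described above pairs each strong domain-objective correlation with a Fisher's exact test on a 2x2 contingency table of schemes; a minimal Python sketch using SciPy with an invented table (the counts are not from the study):

      from scipy.stats import fisher_exact

      # Invented 2x2 table: rows = scheme documents observational data (yes/no),
      # columns = scheme lists "scheme harmonization" as an objective (yes/no).
      table = [[4, 1],
               [1, 3]]
      odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
      print(round(odds_ratio, 2), round(p_value, 3), p_value < 0.05)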