Search (14 results, page 1 of 1)

  • author_ss:"Greenberg, J."
  1. Greenberg, J.: User comprehension and application of information retrieval thesauri (2004) 0.04
    0.03754674 = product of:
      0.07509348 = sum of:
        0.031038022 = weight(_text_:data in 5008) [ClassicSimilarity], result of:
          0.031038022 = score(doc=5008,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 5008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5008)
        0.044055454 = product of:
          0.08811091 = sum of:
            0.08811091 = weight(_text_:processing in 5008) [ClassicSimilarity], result of:
              0.08811091 = score(doc=5008,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.4648076 = fieldWeight in 5008, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5008)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
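The explain trees in these results are standard Lucene ClassicSimilarity (TF-IDF) output, and their arithmetic can be checked by hand: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, each leaf score is queryWeight * fieldWeight, and the coord() factors scale the sums. A minimal Python sketch reproducing the numbers in the tree above:

```python
import math

def classic_sim_score(freq, idf, query_norm, field_norm):
    """Leaf score in Lucene's ClassicSimilarity: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                      # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm           # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm      # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Values taken from the explain tree above (doc 5008)
data_leaf = classic_sim_score(2.0, 3.1620505, 0.046827413, 0.046875)       # ~0.031038
processing_leaf = classic_sim_score(6.0, 4.048147, 0.046827413, 0.046875)  # ~0.088111
# coord(1/2) halves the processing branch; coord(2/4) halves the outer sum
doc_score = 0.5 * (data_leaf + 0.5 * processing_leaf)                      # ~0.0375467
```

The reconstructed doc_score matches the 0.03754674 reported for this record to within float32 rounding.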
    
    Abstract
    While information retrieval thesauri may improve search results, there is little research documenting whether general information system users employ these vocabulary tools. This article explores user comprehension and searching with thesauri. Data were gathered as part of a larger empirical query-expansion study involving the ProQuest Controlled Vocabulary. The results suggest that users' knowledge of thesauri is extremely limited. After receiving a basic thesaurus introduction, however, users indicate a desire to employ these tools. The most significant result was that users expressed a preference for employing thesauri through interactive processing or a combination of automatic and interactive processing, compared to exclusively automatic processing. This article defines information retrieval thesauri, summarizes research results, considers circumstances underlying users' knowledge and searching with thesauri, and highlights future research needs.
  2. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.04
    0.03738249 = product of:
      0.07476498 = sum of:
        0.062076043 = weight(_text_:data in 367) [ClassicSimilarity], result of:
          0.062076043 = score(doc=367,freq=18.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 367, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=367)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 367) [ClassicSimilarity], result of:
              0.025377871 = score(doc=367,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's () metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
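The abstract above mentions using a Fisher's exact test on 2x2 cross-tabulations (p < .05). The test is straightforward to reproduce; the sketch below uses only the standard library, and the example table is invented for illustration (it is not the paper's data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def hyper(x):
        # Hypergeometric probability of x in the top-left cell, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = hyper(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)

# Invented example: nine schemes cross-tabulated by data type vs. a stated objective
p = fisher_exact_two_sided(4, 0, 1, 4)
print(round(p, 4))   # ≈ 0.0476, i.e. significant at .05
```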
  3. White, H.C.; Carrier, S.; Thompson, A.; Greenberg, J.; Scherle, R.: The Dryad Data Repository : a Singapore framework metadata architecture in a DSpace environment (2008) 0.03
    0.029208332 = product of:
      0.058416665 = sum of:
        0.036211025 = weight(_text_:data in 2592) [ClassicSimilarity], result of:
          0.036211025 = score(doc=2592,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 2592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2592)
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 2592) [ClassicSimilarity], result of:
              0.044411276 = score(doc=2592,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 2592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2592)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  4. Li, K.; Greenberg, J.; Dunic, J.: Data objects and documenting scientific processes : an analysis of data events in biodiversity data papers (2020) 0.03
    0.027433997 = product of:
      0.10973599 = sum of:
        0.10973599 = weight(_text_:data in 5615) [ClassicSimilarity], result of:
          0.10973599 = score(doc=5615,freq=36.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.7411056 = fieldWeight in 5615, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5615)
      0.25 = coord(1/4)
    
    Abstract
    The data paper, an emerging scholarly genre, describes research data sets and is intended to bridge the gap between the publication of research data and scientific articles. Research examining how data papers report data events, such as data transactions and manipulations, is limited. The research reported on in this article addresses this limitation and investigated how data events are inscribed in data papers. A content analysis was conducted examining the full texts of 82 data papers, drawn from the curated list of data papers connected to the Global Biodiversity Information Facility. Data events recorded for each paper were organized into a set of 17 categories. Many of these categories are described together in the same sentence, which indicates the messiness of data events in the laboratory space. The findings challenge the degree to which data papers are a distinct genre compared to research articles and describe data-centric research processes in a thorough way. This article also discusses how our results could inform a better data publication ecosystem in the future.
  5. White, H.; Willis, C.; Greenberg, J.: HIVEing : the effect of a semantic web technology on inter-indexer consistency (2014) 0.02
    0.020863095 = product of:
      0.04172619 = sum of:
        0.02586502 = weight(_text_:data in 1781) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1781,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1781)
        0.01586117 = product of:
          0.03172234 = sum of:
            0.03172234 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.03172234 = score(doc=1781,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.19345059 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine the effect of the Helping Interdisciplinary Vocabulary Engineering (HIVE) system on the inter-indexer consistency of information professionals when assigning keywords to a scientific abstract. This study examined first, the inter-indexer consistency of potential HIVE users; second, the impact HIVE had on consistency; and third, challenges associated with using HIVE. Design/methodology/approach - A within-subjects quasi-experimental research design was used for this study. Data were collected using a task-scenario based questionnaire. Analysis was performed on consistency results using Hooper's and Rolling's inter-indexer consistency measures. A series of t-tests was used to judge the significance between consistency measure results. Findings - Results suggest that HIVE improves inter-indexing consistency. Working with HIVE increased consistency rates by 22 percent (Rolling's) and 25 percent (Hooper's) when selecting relevant terms from all vocabularies. A statistically significant difference exists between the assignment of free-text keywords and machine-aided keywords. Issues with homographs, disambiguation, vocabulary choice, and document structure were all identified as potential challenges. Research limitations/implications - Research limitations for this study can be found in the small number of vocabularies used for the study. Future research will include implementing HIVE into the Dryad Repository and studying its application in a repository system. Originality/value - This paper showcases several features used in the HIVE system. By using traditional consistency measures to evaluate a semantic web technology, this paper emphasizes the link between traditional indexing and next-generation machine-aided indexing (MAI) tools.
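Hooper's and Rolling's inter-indexer consistency measures referenced above are simple set-overlap formulas: with C terms in common and A and B terms assigned by the two indexers, Hooper's = C / (A + B - C) and Rolling's = 2C / (A + B). A minimal sketch, with term sets invented for illustration:

```python
def hoopers(terms_a, terms_b):
    """Hooper's measure: agreements over all distinct terms either indexer used."""
    a, b = set(terms_a), set(terms_b)
    c = len(a & b)
    return c / (len(a) + len(b) - c)

def rollings(terms_a, terms_b):
    """Rolling's measure: twice the agreements over the total terms assigned."""
    a, b = set(terms_a), set(terms_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Invented example: keywords two indexers assigned to the same abstract
i1 = {"metadata", "indexing", "thesauri"}
i2 = {"metadata", "indexing", "controlled vocabulary", "semantic web"}
print(hoopers(i1, i2), rollings(i1, i2))   # 0.4 and ≈ 0.571
```

Rolling's measure is always at least as large as Hooper's for the same pair, which is consistent with the 22 versus 25 percent gains reported above referring to different baselines.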
  6. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.02
    0.016690476 = product of:
      0.03338095 = sum of:
        0.020692015 = weight(_text_:data in 2661) [ClassicSimilarity], result of:
          0.020692015 = score(doc=2661,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.1397442 = fieldWeight in 2661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
              0.025377871 = score(doc=2661,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 2661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits - the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advance one's knowledge in a discipline and retain interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and about the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data is drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  7. Newby, G.B.; Greenberg, J.; Jones, P.: Open source software development and Lotka's law : bibliometric patterns in programming (2003) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 5140) [ClassicSimilarity], result of:
          0.053759433 = score(doc=5140,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 5140, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5140)
      0.25 = coord(1/4)
    
    Abstract
    Newby, Greenberg, and Jones analyze programming productivity in open source software by counting registered developers' contributions found in the Linux Software Map and in SourceForge. Using seven years of data from a subset of the Linux directory tree, the LSM data provided 4503 files with 3341 unique author names. The distribution follows Lotka's Law with an exponent of 2.82 as verified by the Kolmogorov-Smirnov one-sample goodness-of-fit test. SourceForge data is broken into developers and administrators, but when both were used as authors the Lotka distribution exponent of 2.55 produces the lowest error. This would not be significant by the K-S test, but the 3.54% maximum error would indicate a fit and calls into question the appropriateness of K-S for large populations of authors.
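Lotka's law predicts that the number of authors with k contributions falls off roughly as k^-alpha, and a Kolmogorov-Smirnov-style statistic compares the empirical distribution with the fitted one. A minimal sketch of that check (the counts and function names below are illustrative, not the study's data):

```python
def lotka_pmf(alpha, n_max):
    """P(author makes k contributions) proportional to k**-alpha, k = 1..n_max."""
    weights = [k ** -alpha for k in range(1, n_max + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def ks_statistic(observed_counts, alpha):
    """Maximum gap between the empirical CDF and the fitted Lotka CDF."""
    total = sum(observed_counts)
    pmf = lotka_pmf(alpha, len(observed_counts))
    d = emp = theo = 0.0
    for obs, p in zip(observed_counts, pmf):
        emp += obs / total
        theo += p
        d = max(d, abs(emp - theo))
    return d

# Invented counts: authors with exactly 1, 2, 3, 4, 5 contributions
counts = [2400, 340, 110, 50, 27]
d = ks_statistic(counts, alpha=2.82)
```

The statistic d would then be compared against the K-S critical value for the author population size, which is the step the abstract questions for very large populations.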
  8. Crystal, A.; Greenberg, J.: Relevance criteria identified by health information users during Web searches (2006) 0.01
    0.011199882 = product of:
      0.04479953 = sum of:
        0.04479953 = weight(_text_:data in 5909) [ClassicSimilarity], result of:
          0.04479953 = score(doc=5909,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.30255508 = fieldWeight in 5909, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5909)
      0.25 = coord(1/4)
    
    Abstract
    This article focuses on the relevance judgments made by health information users who use the Web. Health information users were conceptualized as motivated information users concerned about how an environmental issue affects their health. Users identified their own environmental health interests and conducted a Web search of a particular environmental health Web site. Users were asked to identify (by highlighting with a mouse) the criteria they use to assess relevance in both Web search engine surrogates and full-text Web documents. Content analysis of document criteria highlighted by users identified the criteria these users relied on most often. Key criteria identified included (in order of frequency of appearance) research, topic, scope, data, influence, affiliation, Web characteristics, and authority/person. A power-law distribution of criteria was observed (a few criteria represented most of the highlighted regions, with a long tail of occasionally used criteria). Implications of this work are that information retrieval (IR) systems should be tailored in terms of users' tendencies to rely on certain document criteria, and that relevance research should combine methods to gather richer, contextualized data. Metadata for IR systems, such as that used in search engine surrogates, could be improved by taking into account actual usage of relevance criteria. Such metadata should be user-centered (based on data from users, as in this study) and context-appropriate (fit to users' situations and tasks).
  9. Greenberg, J.: Optimal query expansion (QE) processing methods with semantically encoded structured thesaurus terminology (2001) 0.01
    0.011013864 = product of:
      0.044055454 = sum of:
        0.044055454 = product of:
          0.08811091 = sum of:
            0.08811091 = weight(_text_:processing in 5750) [ClassicSimilarity], result of:
              0.08811091 = score(doc=5750,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.4648076 = fieldWeight in 5750, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5750)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    While researchers have explored the value of structured thesauri as controlled vocabularies for general information retrieval (IR) activities, they have not identified the optimal query expansion (QE) processing methods for taking advantage of the semantic encoding underlying the terminology in these tools. The study reported on in this article addresses this question, and examined whether QE via semantically encoded thesauri terminology is more effective in the automatic or interactive processing environment. The research found that, regardless of end-users' retrieval goals, synonyms and partial synonyms (SYNs) and narrower terms (NTs) are generally good candidates for automatic QE and that related terms (RTs) are better candidates for interactive QE. The study also examined end-users' selection of semantically encoded thesauri terms for interactive QE, and explored how retrieval goals and QE processes may be combined in future thesauri-supported IR systems.
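The finding above suggests a two-mode expansion policy: SYNs and NTs are expanded automatically, while RTs are offered to the user for confirmation. A minimal sketch of such a policy (the thesaurus entry, term relations, and function names are invented for illustration):

```python
# Invented single-entry thesaurus; real entries would come from a tool such
# as the ProQuest Controlled Vocabulary.
THESAURUS = {
    "thesaurus": {
        "SYN": ["controlled vocabulary"],
        "NT": ["subject heading list"],
        "RT": ["ontology"],
    },
}

def expand_query(term, confirm_rt=None):
    """Automatic QE for SYNs and NTs; interactive (user-confirmed) QE for RTs."""
    entry = THESAURUS.get(term, {})
    expanded = [term] + entry.get("SYN", []) + entry.get("NT", [])
    for rt in entry.get("RT", []):
        if confirm_rt is not None and confirm_rt(rt):   # ask the user about RTs
            expanded.append(rt)
    return expanded

print(expand_query("thesaurus"))
print(expand_query("thesaurus", confirm_rt=lambda t: True))
```

In an interactive system, confirm_rt would prompt the searcher; passing None keeps the expansion fully automatic over SYNs and NTs only.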
  10. Greenberg, J.: Metadata and the World Wide Web (2002) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 4264) [ClassicSimilarity], result of:
          0.03657866 = score(doc=4264,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
      0.25 = coord(1/4)
    
    Abstract
    Metadata is of paramount importance for persons, organizations, and endeavors of every dimension that are increasingly turning to the World Wide Web (hereafter referred to as the Web) as a chief conduit for accessing and disseminating information. This is evidenced by the development and implementation of metadata schemas supporting projects ranging from restricted corporate intranets, data warehouses, and consumer-oriented electronic commerce enterprises to freely accessible digital libraries, educational initiatives, virtual museums, and other public Web sites. Today's metadata activities are unprecedented because they extend beyond the traditional library environment in an effort to deal with the Web's exponential growth. This article considers metadata in today's Web environment. The article defines metadata, examines the relationship between metadata and cataloging, provides definitions for key metadata vocabulary terms, and explores the topic of metadata generation. Metadata is an extensive and expanding subject that is prevalent in many environments. For practical reasons, this article has elected to concentrate on the information resource domain, which is defined by electronic textual documents, graphical images, archival materials, museum artifacts, and other objects found in both digital and physical information centers (e.g., libraries, museums, record centers, and archives). To show the extent and larger application of metadata, several examples are also drawn from the data warehouse, electronic commerce, open source, and medical communities.
  11. Greenberg, J.: Theoretical considerations of lifecycle modeling : an analysis of the Dryad Repository demonstrating automatic metadata propagation, inheritance, and value system adoption (2009) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 2990) [ClassicSimilarity], result of:
          0.03657866 = score(doc=2990,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 2990, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2990)
      0.25 = coord(1/4)
    
    Abstract
    The Dryad repository is for data supporting published research in the field of evolutionary biology and related disciplines. Dryad development team members seek a theoretical framework to aid communication about metadata issues and plans. This article explores lifecycle modeling as a theoretical framework for understanding metadata in the repository environment. A background discussion reviews the importance of theory, the status of a metadata theory, and lifecycle concepts. An analysis draws examples from the Dryad repository demonstrating automatic propagation, metadata inheritance, and value system adoption, and reports results from a faceted term mapping experiment that included 12 vocabularies and approximately 600 terms. The article also reports selected key findings from a recent survey on the data-sharing attitudes and behaviors of nearly 400 evolutionary biologists. The results confirm the applicability of lifecycle modeling to Dryad's metadata infrastructure. The article concludes that lifecycle modeling provides a theoretical framework that can enhance our understanding of metadata, aid communication about the topic of metadata in the repository environment, and potentially help sustain robust repository development.
  12. Greenberg, J.: Reference structures : stagnation, progress, and future challenges (1997) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 1103) [ClassicSimilarity], result of:
          0.036211025 = score(doc=1103,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1103)
      0.25 = coord(1/4)
    
    Abstract
    Assesses the current state of reference structures in OPACs in a framework defined by stagnation, progress, and future challenges. 'Stagnation' refers to the limited and inconsistent reference structure access provided in current OPACs. 'Progress' refers to improved OPAC reference structure access and reference structure possibilities that extend beyond those commonly represented in existing subject authority control tools. The progress discussion is supported by a look at professional committee work, data modelling ideas, ontological theory, and one area of linguistic research. The discussion ends with a list of 6 areas needing attention if reference structure access is to be improved in the future OPAC environment.
  13. Greenberg, J.: Understanding metadata and metadata schemes (2005) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 5725) [ClassicSimilarity], result of:
          0.031038022 = score(doc=5725,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 5725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5725)
      0.25 = coord(1/4)
    
    Abstract
    Although the development and implementation of metadata schemes over the last decade has been extensive, research examining the sum of these activities is limited. This limitation is likely due to the massive scope of the topic. A framework is needed to study the full extent of, and functionalities supported by, metadata schemes. Metadata schemes developed for information resources are analyzed. To begin, I present a review of the definition of metadata, metadata functions, and several metadata typologies. Next, a conceptualization for metadata schemes is presented. The emphasis is on semantic container-like metadata schemes (data structures). The last part of this paper introduces the MODAL (Metadata Objectives and principles, Domains, and Architectural Layout) framework as an approach for studying metadata schemes. The paper concludes with a brief discussion on the value of frameworks for examining metadata schemes, including different types of metadata schemes.
  14. Greenberg, J.; Zhao, X.; Monselise, M.; Grabus, S.; Boone, J.: Knowledge organization systems : a network for AI with helping interdisciplinary vocabulary engineering (2021) 0.01
    0.007418666 = product of:
      0.029674664 = sum of:
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 719) [ClassicSimilarity], result of:
              0.05934933 = score(doc=719,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=719)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Knowledge Organization Systems (KOS) as networks of knowledge have the potential to inform AI operations. This paper explores natural language processing and machine learning in the context of KOS and Helping Interdisciplinary Vocabulary Engineering (HIVE) technology. The paper presents three use cases: HIVE and Historical Knowledge Networks, HIVE for Materials Science (HIVE-4-MAT), and Using HIVE to Enhance and Explore Medical Ontologies. The background section reviews AI foundations, while the use cases provide a frame of reference for discussing current progress and implications of connecting KOS to AI in digital resource collections.