Search (235 results, page 1 of 12)

  • theme_ss:"Semantic Web"
  1. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.11
    0.10974465 = product of:
      0.1463262 = sum of:
        0.023237456 = weight(_text_:for in 2665) [ClassicSimilarity], result of:
          0.023237456 = score(doc=2665,freq=26.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.26177883 = fieldWeight in 2665, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.111878954 = weight(_text_:computing in 2665) [ClassicSimilarity], result of:
          0.111878954 = score(doc=2665,freq=8.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.42780277 = fieldWeight in 2665, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.011209788 = product of:
          0.022419576 = sum of:
            0.022419576 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
              0.022419576 = score(doc=2665,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.1354154 = fieldWeight in 2665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2665)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
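    The relevance figure for this record is a Lucene ClassicSimilarity explanation: each leaf weight is a queryWeight (idf * queryNorm) multiplied by a fieldWeight (tf * idf * fieldNorm, with tf = sqrt(freq)), the leaf weights are summed, and the sum is scaled by the coordination factor. A minimal Python sketch, assuming exactly the formulae shown in the tree above, reproduces the figures:

      from math import sqrt

      def leaf_weight(freq, idf, query_norm, field_norm):
          """One leaf of a ClassicSimilarity explanation: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                # e.g. 1.8775425 * 0.047278564 = 0.08876751
          field_weight = sqrt(freq) * idf * field_norm   # tf(freq) * idf * fieldNorm
          return query_weight * field_weight

      # Leaf weights for document 2665, with values copied from the explanation above.
      w_for = leaf_weight(26.0, 1.8775425, 0.047278564, 0.02734375)        # ~0.02323746
      w_computing = leaf_weight(8.0, 5.5314693, 0.047278564, 0.02734375)   # ~0.11187895
      w_22 = 0.5 * leaf_weight(2.0, 3.5018296, 0.047278564, 0.02734375)    # inner coord(1/2)
      print(0.75 * (w_for + w_computing + w_22))  # coord(3/4) -> ~0.10974465, the score shown above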
    
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is the move from appearances in a text, to naming authorities, to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/), linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations, and Geonames (http://geonames.org/) URIs for places are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists.
Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
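    The approach described above, identifiers from naming authorities combined with standardized vocabularies and standard protocols, can be illustrated with a few RDF statements. The following is a minimal sketch using rdflib; the passage URI, the authority identifiers and the choice of Dublin Core terms are illustrative assumptions, not the authors' actual data:

      from rdflib import Graph, URIRef, Namespace

      DCTERMS = Namespace("http://purl.org/dc/terms/")

      g = Graph()
      # Hypothetical passage of a digitized text (placeholder URI).
      passage = URIRef("http://example.org/texts/biography-42#p7")

      # Link "who" and "where" to naming-authority identifiers rather than literal strings
      # (placeholder identifiers that merely follow the WorldCat Identities and Geonames URI patterns).
      g.add((passage, DCTERMS.subject, URIRef("http://worldcat.org/identities/lccn-n00000000")))
      g.add((passage, DCTERMS.spatial, URIRef("http://sws.geonames.org/0000000/")))

      # Publishing the serialization over plain HTTP makes the links traversable anywhere.
      print(g.serialize(format="turtle"))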
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.07
    0.07318133 = product of:
      0.09757511 = sum of:
        0.020833097 = weight(_text_:for in 2661) [ClassicSimilarity], result of:
          0.020833097 = score(doc=2661,freq=16.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 2661, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.06393083 = weight(_text_:computing in 2661) [ClassicSimilarity], result of:
          0.06393083 = score(doc=2661,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.24445872 = fieldWeight in 2661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.012811186 = product of:
          0.025622372 = sum of:
            0.025622372 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
              0.025622372 = score(doc=2661,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.15476047 = fieldWeight in 2661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits - the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data is drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  3. Semantic applications (2018) 0.07
    0.06868714 = product of:
      0.13737428 = sum of:
        0.024359472 = weight(_text_:for in 5204) [ClassicSimilarity], result of:
          0.024359472 = score(doc=5204,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27441877 = fieldWeight in 5204, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
        0.11301481 = weight(_text_:computing in 5204) [ClassicSimilarity], result of:
          0.11301481 = score(doc=5204,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 5204, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.5 = coord(2/4)
    
    Abstract
    This book describes proven methodologies for developing semantic applications: software applications which explicitly or implicitly use the semantics (i.e., the meaning) of a domain terminology in order to improve usability, correctness, and completeness. An example is semantic search, where synonyms and related terms are used for enriching the results of a simple text-based search. Ontologies, thesauri or controlled vocabularies are the centerpiece of semantic applications. The book includes technological and architectural best practices for corporate use.
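    A toy sketch of the synonym-enriched search the abstract gives as an example; the thesaurus and documents below are invented purely for illustration:

      # Invented synonym ring standing in for a thesaurus or controlled vocabulary.
      THESAURUS = {
          "car": {"automobile", "motor vehicle"},
          "ontology": {"controlled vocabulary", "knowledge model"},
      }

      def expand_query(terms):
          """Enrich a keyword query with synonyms and related terms."""
          terms = [t.lower() for t in terms]
          expanded = set(terms)
          for t in terms:
              expanded |= THESAURUS.get(t, set())
          return expanded

      def semantic_search(query_terms, documents):
          """Return documents that match any original or expanded query term."""
          terms = expand_query(query_terms)
          return [d for d in documents if any(t in d.lower() for t in terms)]

      # "car" also retrieves the document that only mentions "motor vehicle".
      print(semantic_search(["car"], ["Motor vehicle registration", "Cooking recipes"]))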
    Content
    Introduction.- Ontology Development.- Compliance using Metadata.- Variety Management for Big Data.- Text Mining in Economics.- Generation of Natural Language Texts.- Sentiment Analysis.- Building Concise Text Corpora from Web Contents.- Ontology-Based Modelling of Web Content.- Personalized Clinical Decision Support for Cancer Care.- Applications of Temporal Conceptual Semantic Systems.- Context-Aware Documentation in the Smart Factory.- Knowledge-Based Production Planning for Industry 4.0.- Information Exchange in Jurisdiction.- Supporting Automated License Clearing.- Managing cultural assets: Implementing typical cultural heritage archive's usage scenarios via Semantic Web technologies.- Semantic Applications for Process Management.- Domain-Specific Semantic Search Applications.
    LCSH
    Management of Computing and Information Systems
    Subject
    Management of Computing and Information Systems
  4. Fensel, D.; Staab, S.; Studer, R.; Harmelen, F. van; Davies, J.: ¬A future perspective : exploiting peer-to-peer and the Semantic Web for knowledge management (2004) 0.07
    0.065053955 = product of:
      0.13010791 = sum of:
        0.01822896 = weight(_text_:for in 2262) [ClassicSimilarity], result of:
          0.01822896 = score(doc=2262,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20535621 = fieldWeight in 2262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2262)
        0.111878954 = weight(_text_:computing in 2262) [ClassicSimilarity], result of:
          0.111878954 = score(doc=2262,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.42780277 = fieldWeight in 2262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2262)
      0.5 = coord(2/4)
    
    Abstract
    Over the past few years, we have seen a growing interest in the potential of both peer-to-peer (P2P) computing and the use of more formal approaches to knowledge management, involving the development of ontologies. This penultimate chapter discusses possibilities that both approaches may offer for more effective and efficient knowledge management. In particular, we investigate how the two paradigms may be combined. In this chapter, we describe our vision in terms of a set of future steps that need to be taken to bring the results described in earlier chapters to their full potential.
  5. Semantic Web services challenge : results from the first year (2009) 0.06
    0.06030063 = product of:
      0.12060126 = sum of:
        0.024705013 = weight(_text_:for in 2479) [ClassicSimilarity], result of:
          0.024705013 = score(doc=2479,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27831143 = fieldWeight in 2479, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2479)
        0.095896244 = weight(_text_:computing in 2479) [ClassicSimilarity], result of:
          0.095896244 = score(doc=2479,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 2479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=2479)
      0.5 = coord(2/4)
    
    Abstract
    Service-Oriented Computing is one of the most promising software engineering trends for future distributed systems. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. Yet a common understanding, evaluation scheme, and test bed for comparing and classifying these frameworks in terms of their abilities and shortcomings are still missing. "Semantic Web Services Challenge" is an edited volume that develops this common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. "Semantic Web Services Challenge" is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for its potential practical use. The book is also suitable for advanced-level students in computer science.
  6. Hitzler, P.; Krötzsch, M.; Rudolph, S.: Foundations of Semantic Web technologies (2010) 0.05
    0.05494971 = product of:
      0.10989942 = sum of:
        0.019487578 = weight(_text_:for in 359) [ClassicSimilarity], result of:
          0.019487578 = score(doc=359,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21953502 = fieldWeight in 359, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=359)
        0.09041184 = weight(_text_:computing in 359) [ClassicSimilarity], result of:
          0.09041184 = score(doc=359,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.34571683 = fieldWeight in 359, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=359)
      0.5 = coord(2/4)
    
    Abstract
    This text introduces the standardized knowledge representation languages for modeling ontologies operating at the core of the semantic web. It covers RDF schema, Web Ontology Language (OWL), rules, query languages, the OWL 2 revision, and the forthcoming Rule Interchange Format (RIF). A 2010 CHOICE Outstanding Academic Title ... The nine chapters of the book guide the reader through the major foundational languages for the semantic Web and highlight the formal semantics. ... the book has very interesting supporting material and exercises, is oriented to W3C standards, and provides the necessary foundations for the semantic Web. It will be easy to follow by the computer scientist who already has a basic background on semantic Web issues; it will also be helpful for both self-study and teaching purposes. I recommend this book primarily as a complementary textbook for a graduate or undergraduate course in a computer science or a Web science academic program. --Computing Reviews, February 2010 This book is unique in several respects. It contains an in-depth treatment of all the major foundational languages for the Semantic Web and provides a full treatment of the underlying formal semantics, which is central to the Semantic Web effort. It is also the very first textbook that addresses the forthcoming W3C recommended standards OWL 2 and RIF. Furthermore, the covered topics and underlying concepts are easily accessible for the reader due to a clear separation of syntax and semantics ... I am confident this book will be well received and play an important role in training a larger number of students who will seek to become proficient in this growing discipline.
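    To make the languages listed above concrete, a small self-contained rdflib sketch: a two-triple RDF graph queried with SPARQL. The vocabulary and data are invented and only meant to show the mechanics:

      from rdflib import Graph

      TURTLE = """
      @prefix ex: <http://example.org/> .
      ex:book  ex:topic ex:OWL ;
               ex:title "Foundations of Semantic Web Technologies" .
      """

      g = Graph()
      g.parse(data=TURTLE, format="turtle")

      # SPARQL over the in-memory graph: titles of resources whose topic is ex:OWL.
      QUERY = """
      PREFIX ex: <http://example.org/>
      SELECT ?title WHERE { ?b ex:topic ex:OWL ; ex:title ?title }
      """
      for row in g.query(QUERY):
          print(row.title)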
    Series
    Chapman & Hall/CRC textbooks in computing
  7. Brambilla, M.; Ceri, S.: Designing exploratory search applications upon Web data sources (2012) 0.04
    0.043013833 = product of:
      0.08602767 = sum of:
        0.022096835 = weight(_text_:for in 428) [ClassicSimilarity], result of:
          0.022096835 = score(doc=428,freq=18.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2489293 = fieldWeight in 428, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=428)
        0.06393083 = weight(_text_:computing in 428) [ClassicSimilarity], result of:
          0.06393083 = score(doc=428,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.24445872 = fieldWeight in 428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=428)
      0.5 = coord(2/4)
    
    Abstract
    Search is the preferred method to access information in today's computing systems. The Web, accessed through search engines, is universally recognized as the source for answering users' information needs. However, offering a link to a Web page does not cover all information needs. Even simple problems, such as "Which theater offers an action movie rated at least three stars in London, close to a good Italian restaurant?", can only be solved by searching the Web multiple times, e.g., by extracting a list of the recent action movies filtered by ranking, then looking for movie theaters, then looking for Italian restaurants close to them. While search engines hint at useful information, the user's brain is the fundamental platform for information integration. An important trend is the availability of new, specialized data sources - the so-called "long tail" of the Web of data. Such carefully collected and curated data sources can be much more valuable than information currently available in Web pages; however, many sources remain hidden or insulated for lack of software solutions to bring them to the surface and make them usable in the search context. A new class of tailor-made systems, designed to satisfy the needs of users with specific aims, will support the publishing and integration of data sources for vertical domains; the user will be able to select sources based on individual or collective trust, and systems will be able to route queries to such sources and to provide easy-to-use interfaces for combining them within search strategies, at the same time rewarding the data source owners for each contribution to effective search. Efforts such as Google's Fusion Tables show that the technology for bringing hidden data sources to the surface is feasible.
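    As a rough illustration of the multi-step search in the movie/restaurant example, the sketch below routes sub-queries to specialized sources and composes the answer in code rather than in the user's head; every source and record in it is hypothetical:

      # Hypothetical specialized data sources (the "long tail" of the Web of data).
      MOVIES = [{"title": "Action Movie A", "stars": 4, "theater": "Odeon"}]
      THEATERS = {"Odeon": {"city": "London", "near": ["Trattoria Roma"]}}
      RESTAURANTS = {"Trattoria Roma": {"cuisine": "Italian", "rating": "good"}}

      def exploratory_search(min_stars=3):
          """Chain three sources: well-rated movies -> their theaters -> nearby Italian restaurants."""
          results = []
          for movie in MOVIES:
              if movie["stars"] < min_stars:
                  continue
              theater = THEATERS.get(movie["theater"], {})
              for place in theater.get("near", []):
                  restaurant = RESTAURANTS.get(place, {})
                  if restaurant.get("cuisine") == "Italian":
                      results.append((movie["title"], movie["theater"], place))
          return results

      print(exploratory_search())   # [('Action Movie A', 'Odeon', 'Trattoria Roma')]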
  8. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.04
    0.040648535 = product of:
      0.08129707 = sum of:
        0.03645792 = weight(_text_:for in 4643) [ClassicSimilarity], result of:
          0.03645792 = score(doc=4643,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.41071242 = fieldWeight in 4643, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=4643)
        0.04483915 = product of:
          0.0896783 = sum of:
            0.0896783 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.0896783 = score(doc=4643,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Paper presented at the seminar "Tools for knowledge organization - ISKO UK Seminar", 4 September 2007.
    Date
    22. 9.2007 15:41:14
  9. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.04
    0.03913265 = product of:
      0.0782653 = sum of:
        0.022325827 = weight(_text_:for in 1981) [ClassicSimilarity], result of:
          0.022325827 = score(doc=1981,freq=24.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 1981, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.055939477 = weight(_text_:computing in 1981) [ClassicSimilarity], result of:
          0.055939477 = score(doc=1981,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.21390139 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.5 = coord(2/4)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
    Content
    Contents: Tim Berners-Lee: The Original Dream - Re-enter Machines - Where Are We Now? - The World Wide Web Consortium - Where Is the Web Going Next? / Dieter Fensel, James Hendler, Henry Lieberman, and Wolfgang Wahlster: Why Is There a Need for the Semantic Web and What Will It Provide? - How the Semantic Web Will Be Possible / Jeff Heflin, James Hendler, and Sean Luke: SHOE: A Blueprint for the Semantic Web / Deborah L. McGuinness, Richard Fikes, Lynn Andrea Stein, and James Hendler: DAML-ONT: An Ontology Language for the Semantic Web / Michel Klein, Jeen Broekstra, Dieter Fensel, Frank van Harmelen, and Ian Horrocks: Ontologies and Schema Languages on the Web / Borys Omelayenko, Monica Crubezy, Dieter Fensel, Richard Benjamins, Bob Wielinga, Enrico Motta, Mark Musen, and Ying Ding: UPML: The Language and Tool Support for Making the Semantic Web Alive / Deborah L. McGuinness: Ontologies Come of Age / Jeen Broekstra, Arjohn Kampman, and Frank van Harmelen: Sesame: An Architecture for Storing and Querying RDF Data and Schema Information / Rob Jasper and Mike Uschold: Enabling Task-Centered Knowledge Support through Semantic Markup / Yolanda Gil: Knowledge Mobility: Semantics for the Web as a White Knight for Knowledge-Based Systems / Sanjeev Thacker, Amit Sheth, and Shuchi Patel: Complex Relationships for the Semantic Web / Alexander Maedche, Steffen Staab, Nenad Stojanovic, Rudi Studer, and York Sure: SEmantic portAL: The SEAL Approach / Ora Lassila and Mark Adler: Semantic Gadgets: Ubiquitous Computing Meets the Semantic Web / Christopher Frye, Mike Plusch, and Henry Lieberman: Static and Dynamic Semantics of the Web / Masahiro Hori: Semantic Annotation for Web Content Adaptation / Austin Tate, Jeff Dalton, John Levine, and Alex Nixon: Task-Achieving Agents on the World Wide Web
  10. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.034774087 = product of:
      0.069548175 = sum of:
        0.050060596 = product of:
          0.15018179 = sum of:
            0.15018179 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.15018179 = score(doc=701,freq=2.0), product of:
                0.40082818 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047278564 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.019487578 = weight(_text_:for in 701) [ClassicSimilarity], result of:
          0.019487578 = score(doc=701,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21953502 = fieldWeight in 701, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(2/4)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to retrieval results of very low usefulness for the user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, implies the necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need correctly and to interpret the retrieval results accordingly is key to realizing much more meaningful information retrieval systems.
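    A minimal sketch of the kind of ontology-driven query interpretation described above: a keyword is matched to a concept and broadened to its sub-concepts before retrieval. The mini-ontology is invented, and the logic is of course far simpler than the Librarian Agent Query Refinement Process itself:

      # Invented mini-ontology: concept -> direct sub-concepts.
      ONTOLOGY = {
          "vehicle": ["car", "bicycle"],
          "car": ["electric car"],
      }

      def subconcepts(concept):
          """Collect a concept and all of its descendants."""
          found = [concept]
          for child in ONTOLOGY.get(concept, []):
              found.extend(subconcepts(child))
          return found

      def refine_query(keyword):
          """Interpret a keyword against the ontology and expand it conceptually."""
          return subconcepts(keyword) if keyword in ONTOLOGY else [keyword]

      print(refine_query("vehicle"))   # ['vehicle', 'car', 'electric car', 'bicycle']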
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  11. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.03
    0.030265197 = product of:
      0.060530394 = sum of:
        0.022096837 = weight(_text_:for in 6048) [ClassicSimilarity], result of:
          0.022096837 = score(doc=6048,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 6048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=6048)
        0.038433556 = product of:
          0.07686711 = sum of:
            0.07686711 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07686711 = score(doc=6048,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Paper presented at the seminar "Tools for knowledge organization - ISKO UK Seminar", 4 September 2007.
    Date
    22. 9.2007 15:41:14
  12. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.03
    0.030265197 = product of:
      0.060530394 = sum of:
        0.022096837 = weight(_text_:for in 100) [ClassicSimilarity], result of:
          0.022096837 = score(doc=100,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=100)
        0.038433556 = product of:
          0.07686711 = sum of:
            0.07686711 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.07686711 = score(doc=100,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Paper presented at the seminar "Tools for knowledge organization - ISKO UK Seminar", 4 September 2007.
    Date
    22. 9.2007 15:41:14
  13. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. (2010) 0.03
    0.028253702 = product of:
      0.11301481 = sum of:
        0.11301481 = weight(_text_:computing in 4706) [ClassicSimilarity], result of:
          0.11301481 = score(doc=4706,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 4706, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4706)
      0.25 = coord(1/4)
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
  14. Weibel, S.L.: Social Bibliography : a personal perspective on libraries and the Semantic Web (2006) 0.03
    0.027969738 = product of:
      0.111878954 = sum of:
        0.111878954 = weight(_text_:computing in 250) [ClassicSimilarity], result of:
          0.111878954 = score(doc=250,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.42780277 = fieldWeight in 250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0546875 = fieldNorm(doc=250)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents a personal perspective on libraries and the Semantic Web. The paper discusses computing power, increased availability of processable text, social software developments and the ideas underlying Web 2.0 and the impact of these developments in the context of libraries and information. The article concludes with a discussion of social bibliography and the declining hegemony of catalog records, and emphasizes the strengths of librarianship and the profession's ability to contribute to Semantic Web development.
  15. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.03
    0.025233213 = product of:
      0.050466426 = sum of:
        0.031249646 = weight(_text_:for in 2024) [ClassicSimilarity], result of:
          0.031249646 = score(doc=2024,freq=16.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.35203922 = fieldWeight in 2024, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2024)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2024,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2024, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2024)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
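    A rough stand-in for what a shape does, using plain rdflib rather than an actual Resource Shapes or ShEx engine: check that an RDF node carries every property a consumer expects. The data and the required properties are invented:

      from rdflib import Graph, URIRef, Namespace

      FOAF = Namespace("http://xmlns.com/foaf/0.1/")

      TURTLE = """
      @prefix foaf: <http://xmlns.com/foaf/0.1/> .
      <http://example.org/alice> foaf:name "Alice" .
      """

      g = Graph()
      g.parse(data=TURTLE, format="turtle")

      def conforms(graph, node, required_properties):
          """Tiny 'shape' check: does the node have a value for every required property?"""
          return all(next(graph.objects(node, p), None) is not None
                     for p in required_properties)

      alice = URIRef("http://example.org/alice")
      print(conforms(g, alice, [FOAF.name]))             # True
      print(conforms(g, alice, [FOAF.name, FOAF.mbox]))  # False: foaf:mbox is missing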
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  16. OWL Web Ontology Language Test Cases (2004) 0.02
    0.023227734 = product of:
      0.04645547 = sum of:
        0.020833097 = weight(_text_:for in 4685) [ClassicSimilarity], result of:
          0.020833097 = score(doc=4685,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 4685, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=4685)
        0.025622372 = product of:
          0.051244743 = sum of:
            0.051244743 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.051244743 = score(doc=4685,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
  17. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. (2010) 0.02
    0.02260296 = product of:
      0.09041184 = sum of:
        0.09041184 = weight(_text_:computing in 4707) [ClassicSimilarity], result of:
          0.09041184 = score(doc=4707,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.34571683 = fieldWeight in 4707, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=4707)
      0.25 = coord(1/4)
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
  18. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    0.0223727 = product of:
      0.0447454 = sum of:
        0.022325827 = weight(_text_:for in 3283) [ClassicSimilarity], result of:
          0.022325827 = score(doc=3283,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 3283, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.04483915 = score(doc=3283,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  19. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.02
    0.020656807 = product of:
      0.041313615 = sum of:
        0.022096837 = weight(_text_:for in 2418) [ClassicSimilarity], result of:
          0.022096837 = score(doc=2418,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 2418, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2418,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  20. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.02
    0.020656807 = product of:
      0.041313615 = sum of:
        0.022096837 = weight(_text_:for in 662) [ClassicSimilarity], result of:
          0.022096837 = score(doc=662,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 662, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=662)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
              0.038433556 = score(doc=662,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Arts and Architecture Thesaurus (AAT) through the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
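    A toy sketch of the reconciliation step the abstract describes: local metadata terms matched against a controlled vocabulary by normalized string comparison. The vocabulary sample and its identifiers are invented and far simpler than LCSH or the AAT:

      # Invented sample of a controlled vocabulary (normalized label -> placeholder identifier).
      CONTROLLED_VOCAB = {
          "world wide web": "concept/0001",
          "semantic web": "concept/0002",
      }

      def normalize(term):
          """Lower-case, replace hyphens and collapse whitespace before matching."""
          return " ".join(term.lower().replace("-", " ").split())

      def reconcile(local_terms):
          """Map local metadata terms to controlled-vocabulary identifiers (None if unmatched)."""
          return {t: CONTROLLED_VOCAB.get(normalize(t)) for t in local_terms}

      print(reconcile(["Semantic  Web", "Dada"]))   # {'Semantic  Web': 'concept/0002', 'Dada': None}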
    Date
    22. 3.2013 19:29:20
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.464-479

Languages

  • e 219
  • d 14
  • f 1

Types

  • a 141
  • el 71
  • m 42
  • s 18
  • n 9
  • x 5
  • r 3
