Search (52 results, page 3 of 3)

  • theme_ss:"Metadaten"
  • year_i:[2010 TO 2020}
  1. Bellotto, A.; Bekesi, J.: Enriching metadata for a university repository by modelling and infrastructure : a new vocabulary server for Phaidra (2019) 0.00
    0.001153389 = product of:
      0.010380501 = sum of:
        0.010380501 = product of:
          0.020761002 = sum of:
            0.020761002 = weight(_text_:web in 5693) [ClassicSimilarity], result of:
              0.020761002 = score(doc=5693,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.21634221 = fieldWeight in 5693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5693)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
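The nested breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring; every hit on this page carries the same structure. As a minimal sketch (standalone Python, not Lucene itself), the arithmetic for this first hit can be reproduced from the constants shown in the tree:

```python
import math

# Minimal sketch reproducing the ClassicSimilarity (TF-IDF) arithmetic of the
# explain tree above for doc 5693 and the query term "web".
freq       = 2.0          # termFreq: "web" occurs twice in the matching field
doc_freq   = 4597
max_docs   = 44218
query_norm = 0.02940506
field_norm = 0.046875

idf          = 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.2635105
tf           = math.sqrt(freq)                             # 1.4142135
query_weight = idf * query_norm                            # 0.09596372  (queryWeight)
field_weight = tf * idf * field_norm                        # 0.21634221  (fieldWeight)
term_score   = query_weight * field_weight                  # 0.020761002

# coord() factors down-weight documents that match only 1 of 2 (and 1 of 9) query clauses.
final_score = term_score * 0.5 * (1.0 / 9.0)
print(f"{final_score:.9f}")   # 0.001153389, the score shown for this hit
```

The same recipe, with the respective freq, idf and fieldNorm values, yields the scores shown for the remaining results on this page.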
    
    Abstract
    This paper illustrates an initial step towards the 'semantic enrichment' of the University of Vienna's Phaidra repository, a valuable and up-to-date strategy for enhancing its role and usage. First, a technical report outlines the choice made in the local context, i.e. the deployment of the vocabulary server iQvoc instead of the formerly used SKOSMOS, explaining the design decisions behind the current tool and the additional features that the implementation required. Afterwards, some modelling characteristics of the local LOD controlled vocabulary are described with reference to the SKOS documentation and best practices, highlighting which approaches can be pursued for making a LOD KOS available on the Web, as well as issues that may be encountered.
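As a hedged illustration of the kind of SKOS modelling the abstract refers to, the sketch below builds a single LOD concept with rdflib; the namespace, URIs and labels are hypothetical and not taken from the Phaidra vocabulary server:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical vocabulary namespace; all identifiers below are illustrative only.
VOC = Namespace("https://example.org/vocab/")

g = Graph()
g.bind("skos", SKOS)

concept = VOC["metadata"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Metadata", lang="en")))
g.add((concept, SKOS.prefLabel, Literal("Metadaten", lang="de")))
g.add((concept, SKOS.broader, VOC["information-organization"]))
g.add((concept, SKOS.inScheme, VOC["scheme"]))

print(g.serialize(format="turtle"))
```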
  2. Cho, H.; Donovan, A.; Lee, J.H.: Art in an algorithm : a taxonomy for describing video game visual styles (2018) 0.00
    0.001106661 = product of:
      0.009959949 = sum of:
        0.009959949 = product of:
          0.019919898 = sum of:
            0.019919898 = weight(_text_:22 in 4218) [ClassicSimilarity], result of:
              0.019919898 = score(doc=4218,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.19345059 = fieldWeight in 4218, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4218)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The discovery and retrieval of video games in library and information systems is, by and large, dependent on a limited set of descriptive metadata. Noticeably missing from this metadata are classifications of visual style, despite the overwhelmingly visual nature of most video games and the interest in visual style among video game users. One explanation for this paucity is the difficulty in eliciting consistent judgements about visual style, likely due to subjective interpretations of terminology and a lack of demonstrable testing for coinciding judgements. This study presents a taxonomy of video game visual styles constructed from the findings of a 22-participant cataloging user study of visual styles. A detailed description of the study, and its value and shortcomings, is presented along with reflections on the challenges of cultivating consensus about visual style in video games. The high degree of overall agreement in the user study demonstrates the potential value of a descriptor like visual style and the use of a cataloging study in developing visual style taxonomies. The resulting visual style taxonomy, and the methods and analysis described herein, may help improve the organization and retrieval of video games and possibly other visual materials such as graphic designs, illustrations, and animations.
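The abstract reports a "high degree of overall agreement" without naming the measure used. Purely as a hypothetical illustration, a simple pairwise percent-agreement calculation over invented style judgements might look like this:

```python
from itertools import combinations

# Invented data: the visual-style label each of four participants assigned to each game.
judgements = {
    "game_a": ["cel-shaded", "cel-shaded", "cel-shaded", "realistic"],
    "game_b": ["pixel art", "pixel art", "pixel art", "pixel art"],
}

def pairwise_agreement(labels):
    """Fraction of participant pairs that assigned the same label."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

overall = sum(pairwise_agreement(v) for v in judgements.values()) / len(judgements)
print(f"mean pairwise agreement: {overall:.2f}")   # 0.75 for this toy data
```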
  3. Miller, S.J.: Metadata for digital collections : a how-to-do-it manual (2011) 0.00
    0.0010874257 = product of:
      0.009786831 = sum of:
        0.009786831 = product of:
          0.019573662 = sum of:
            0.019573662 = weight(_text_:web in 4911) [ClassicSimilarity], result of:
              0.019573662 = score(doc=4911,freq=4.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.2039694 = fieldWeight in 4911, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4911)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    More and more libraries, archives, and museums are creating online collections of digitized resources. Where can those charged with organizing these new collections turn for guidance on the actual practice of metadata design and creation? "Metadata for Digital Collections: A How-to-do-it Manual" is suitable for libraries, archives, and museums. This practical, hands-on volume will make it easy for readers to acquire the knowledge and skills they need, whether they use the book on the job or in a classroom. Author Steven Miller introduces readers to fundamental concepts and practices in a style accessible to beginners and LIS students, as well as experienced practitioners with little metadata training. He also takes account of the widespread use of digital collection management systems such as CONTENTdm. Rather than surveying a large number of metadata schemes, Miller covers only three of the schemes most commonly used in general digital resource description, namely, Dublin Core, MODS, and VRA. By limiting himself, Miller is able to address the chosen schemes in greater depth. He is also able to include numerous practical examples that clarify common application issues and challenges. He provides practical guidance on applying each of the Dublin Core elements, taking special care to clarify those most commonly misunderstood. The book includes a step-by-step guide on how to design and document a metadata scheme for local institutional needs and for specific digital collection projects. The text also serves well as an introduction to broader metadata topics, including XML encoding, mapping between different schemes, metadata interoperability and record sharing, OAI harvesting, and the emerging environment of Linked Data and the Semantic Web, explaining their relevance to current practitioners and students. Each chapter offers a set of exercises, with suggestions for instructors. A companion website includes additional practical and reference resources.
    Content
    Introduction to metadata for digital collections -- Introduction to resource description and Dublin Core -- Resource identification and responsibility elements -- Resource content and relationship elements -- Controlled vocabularies for improved resource discovery -- XML-encoded metadata -- MODS : the Metadata Object Description Schema -- VRA Core : the Visual Resources Association Core Categories -- Metadata interoperability, shareability, and quality -- Designing and documenting a metadata scheme -- Metadata, linked data, and the Semantic Web.
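As a rough illustration of the kind of record the Dublin Core chapters walk through, the sketch below assembles a minimal oai_dc-style description in Python; the element values are invented:

```python
import xml.etree.ElementTree as ET

# Minimal simple Dublin Core record in an oai_dc container; values are invented.
DC  = "http://purl.org/dc/elements/1.1/"
OAI = "http://www.openarchives.org/OAI/2.0/oai_dc/"
ET.register_namespace("dc", DC)
ET.register_namespace("oai_dc", OAI)

record = ET.Element(f"{{{OAI}}}dc")
for element, value in [
    ("title",   "Photograph of the old university library, 1912"),
    ("creator", "Unknown photographer"),
    ("date",    "1912"),
    ("type",    "Image"),
    ("subject", "Libraries -- History"),
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
```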
  4. Syn, S.Y.; Spring, M.B.: Finding subject terms for classificatory metadata from user-generated social tags (2013) 0.00
    9.611576E-4 = product of:
      0.008650418 = sum of:
        0.008650418 = product of:
          0.017300837 = sum of:
            0.017300837 = weight(_text_:web in 745) [ClassicSimilarity], result of:
              0.017300837 = score(doc=745,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18028519 = fieldWeight in 745, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=745)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    With the increasing popularity of social tagging systems, the potential for using social tags as a source of metadata is being explored. Social tagging systems can simplify the involvement of a large number of users and improve the metadata-generation process. Current research is exploring social tagging systems as a mechanism to allow nonprofessional catalogers to participate in metadata generation. Because social tags are not from controlled vocabularies, there are issues that have to be addressed in finding quality terms to represent the content of a resource. This research explores ways to obtain a set of tags representing the resource from the tags provided by users. Two metrics are introduced. Annotation Dominance (AD) is a measure of the extent to which a tag term is agreed to by users. Cross Resources Annotation Discrimination (CRAD) is a measure of a tag's potential to classify a collection. It is designed to remove tags that are used too broadly or narrowly. Using the proposed measurements, the research selects important tags (meta-terms) and removes meaningless ones (tag noise) from the tags provided by users. To evaluate the proposed approach to find classificatory metadata candidates, we rely on expert users' relevance judgments comparing suggested tag terms and expert metadata terms. The results suggest that processing of user tags using the two measurements successfully identifies the terms that represent the topic categories of web resource content. The suggested tag terms can be further examined in various usages as semantic metadata for the resources.
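The sketch below is one plausible reading of the two metrics named in the abstract, not the authors' exact formulas; the tagging data are invented:

```python
from math import log

# taggings[resource] = list of (user, tag) pairs; invented for illustration.
taggings = {
    "r1": [("u1", "python"), ("u2", "python"), ("u3", "snake"), ("u1", "code")],
    "r2": [("u1", "python"), ("u2", "java"), ("u3", "code")],
}

def annotation_dominance(resource, tag):
    """AD: share of the users annotating `resource` who agreed on `tag`."""
    users = {u for u, _ in taggings[resource]}
    users_with_tag = {u for u, t in taggings[resource] if t == tag}
    return len(users_with_tag) / len(users)

def crad(tag):
    """CRAD-like score: tags attached to (almost) every resource, or to none,
    discriminate poorly across the collection and score low."""
    coverage = sum(any(t == tag for _, t in pairs)
                   for pairs in taggings.values()) / len(taggings)
    return 0.0 if coverage in (0.0, 1.0) else -coverage * log(coverage)

print(annotation_dominance("r1", "python"))  # 0.666..., two of three users agree
print(crad("python"))                        # 0.0: used on every resource, too broad
print(crad("snake"))                         # ~0.347: used selectively
```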
  5. Peters, I.; Stock, W.G.: Power tags in information retrieval (2010) 0.00
    9.611576E-4 = product of:
      0.008650418 = sum of:
        0.008650418 = product of:
          0.017300837 = sum of:
            0.017300837 = weight(_text_:web in 865) [ClassicSimilarity], result of:
              0.017300837 = score(doc=865,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18028519 = fieldWeight in 865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=865)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    Purpose - Many Web 2.0 services (including Library 2.0 catalogs) make use of folksonomies. The purpose of this paper is to cut off all tags in the long tail of a document-specific tag distribution. The remaining tags at the beginning of a tag distribution are considered power tags and form a new, additional search option in information retrieval systems. Design/methodology/approach - In a theoretical approach the paper discusses document-specific tag distributions (power law and inverse-logistic shape), the development of such distributions (Yule-Simon process and shuffling theory) and introduces search tags (besides the well-known index tags) as a possibility for generating tag distributions. Findings - Search tags are compatible with broad and narrow folksonomies and with all knowledge organization systems (e.g. classification systems and thesauri), while index tags are only applicable in broad folksonomies. Based on these findings, the paper presents a sketch of an algorithm for mining and processing power tags in information retrieval systems. Research limitations/implications - This conceptual approach is in need of empirical evaluation in a concrete retrieval system. Practical implications - Power tags are a new search option for retrieval systems to limit the number of hits. Originality/value - The paper introduces power tags as a means for enhancing the precision of search results in information retrieval systems that apply folksonomies, e.g. catalogs in Library 2.0 environments.
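A hedged sketch of the basic idea, cutting off the long tail of a document-specific tag distribution, is shown below; the cutoff rule (a fraction of the most frequent tag) is an assumption, not the algorithm proposed in the paper:

```python
# Invented document-specific tag distribution (tag -> frequency).
tag_counts = {"web": 120, "metadata": 95, "tagging": 80,
              "misc": 7, "todo": 3, "stuff": 1}

def power_tags(counts, head_fraction=0.5):
    """Keep only the head of the distribution: tags whose frequency reaches
    at least `head_fraction` of the most frequent tag."""
    cutoff = max(counts.values()) * head_fraction
    return [t for t, c in sorted(counts.items(), key=lambda kv: -kv[1]) if c >= cutoff]

print(power_tags(tag_counts))  # ['web', 'metadata', 'tagging']
```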
  6. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.00
    9.611576E-4 = product of:
      0.008650418 = sum of:
        0.008650418 = product of:
          0.017300837 = sum of:
            0.017300837 = weight(_text_:web in 2192) [ClassicSimilarity], result of:
              0.017300837 = score(doc=2192,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18028519 = fieldWeight in 2192, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Theme
    Semantic Web
  7. Social tagging in a linked data environment. Edited by Diane Rasmussen Pennington and Louise F. Spiteri. London, UK: Facet Publishing, 2018. 240 pp. £74.95 (paperback). (ISBN 9781783303380) (2019) 0.00
    9.611576E-4 = product of:
      0.008650418 = sum of:
        0.008650418 = product of:
          0.017300837 = sum of:
            0.017300837 = weight(_text_:web in 101) [ClassicSimilarity], result of:
              0.017300837 = score(doc=101,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18028519 = fieldWeight in 101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=101)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    Social tagging, hashtags, and geotags are used across a variety of platforms (Twitter, Facebook, Tumblr, WordPress, Instagram) in different countries and cultures. This book, representing researchers and practitioners across different information professions, explores how social tags can link content across a variety of environments. Most studies of social tagging have tended to focus on applications like library catalogs, blogs, and social bookmarking sites. Setting out a theoretical background and a series of case studies, this book explores the role of hashtags as a form of linked data without the complex implementation of RDF and other Semantic Web technologies.
  8. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.00
    8.853288E-4 = product of:
      0.007967959 = sum of:
        0.007967959 = product of:
          0.015935918 = sum of:
            0.015935918 = weight(_text_:22 in 367) [ClassicSimilarity], result of:
              0.015935918 = score(doc=367,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.15476047 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's () metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
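The abstract names a Fisher's exact test on domain/objective relationships. A minimal sketch of that kind of test, using an invented 2x2 contingency table over the nine schemes, could look like this:

```python
from scipy.stats import fisher_exact

# Invented counts for one domain/objective pair across nine schemes:
#                  has objective   lacks objective
table = [[4, 0],   # schemes in the domain (e.g. observational data)
         [1, 4]]   # schemes outside it

odds_ratio, p_value = fisher_exact(table)
# odds_ratio is inf here because one cell is zero; p is approximately 0.048,
# which would count as significant under the p < .05 threshold named above.
print(f"odds ratio = {odds_ratio}, p = {p_value:.3f}")
```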
  9. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.00
    7.6892605E-4 = product of:
      0.0069203344 = sum of:
        0.0069203344 = product of:
          0.013840669 = sum of:
            0.013840669 = weight(_text_:web in 2320) [ClassicSimilarity], result of:
              0.013840669 = score(doc=2320,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.14422815 = fieldWeight in 2320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2320)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries: how to search across multiple unrelated libraries with a single query. Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records. Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies. Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity. Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, to enhance recall or precision. Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries. Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower level matches to broader parents and thus approximates the practices of a human cataloger.
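A minimal sketch of the mean reciprocal rank (MRR) evaluation described in the abstract, with invented rankings and gold classes:

```python
# Invented data: automatically generated DDC rankings vs. the librarian's class.
generated = {
    "rec1": ["025.3", "020", "006.3"],   # ranked DDC candidates
    "rec2": ["780", "025.3", "020"],
    "rec3": ["510", "004"],
}
gold = {"rec1": "025.3", "rec2": "020", "rec3": "025.04"}

def reciprocal_rank(ranking, correct):
    """1/position of the correct class in the ranking, 0 if it never appears."""
    return 1.0 / (ranking.index(correct) + 1) if correct in ranking else 0.0

mrr = sum(reciprocal_rank(generated[r], gold[r]) for r in gold) / len(gold)
print(f"MRR = {mrr:.3f}")   # (1 + 1/3 + 0) / 3 = 0.444 for this toy data
```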
  10. Alemu, G.: ¬A theory of metadata enriching and filtering (2016) 0.00
    7.6892605E-4 = product of:
      0.0069203344 = sum of:
        0.0069203344 = product of:
          0.013840669 = sum of:
            0.013840669 = weight(_text_:web in 5068) [ClassicSimilarity], result of:
              0.013840669 = score(doc=5068,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.14422815 = fieldWeight in 5068, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5068)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    This paper presents a new theory of metadata enriching and filtering. The theory emerged from a rigorous grounded theory data analysis of 57 in-depth interviews with metadata experts, library and information science researchers, librarians as well as academic library users (G. Alemu, A Theory of Digital Library Metadata: The Emergence of Enriching and Filtering, University of Portsmouth PhD thesis, Portsmouth, 2014). Partly due to the novelty of Web 2.0 approaches and mainly due to the absence of foundational theories to underpin socially constructed metadata approaches, this research adopted a social constructivist philosophical approach and a constructivist grounded theory method (K. Charmaz, Constructing Grounded Theory: A Practical Guide through Qualitative Analysis, SAGE Publications, London, 2006). The theory espouses the importance of enriching information objects with descriptions pertaining to the about-ness of information objects. Such richness and diversity of descriptions, it is argued, could chiefly be achieved by involving users in the metadata creation process. The theory includes four overarching metadata principles - metadata enriching, linking, openness and filtering. The theory proposes a mixed metadata approach where metadata experts provide the requisite basic descriptive metadata, structure and interoperability (a priori metadata) while users continually enrich it with their own interpretations (post-hoc metadata). Enriched metadata is inter- and cross-linked (the principle of linking), made openly accessible (the principle of openness) and presented (the principle of filtering) according to user needs. It is argued that enriched, interlinked and open metadata rises to, and scales with, the challenges presented by growing digital collections and changing user expectations. This metadata approach allows users to pro-actively engage in co-creating metadata, hence enhancing the findability, discoverability and subsequent usage of information resources. This paper concludes by indicating the current challenges and opportunities to implement the theory of metadata enriching and filtering.
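As a hedged sketch of the mixed a priori / post-hoc approach and the filtering principle described above, the data structure below is one way to picture it; the field names and records are assumptions, not the author's model:

```python
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    a_priori: dict                                  # expert-created descriptive metadata
    post_hoc: list = field(default_factory=list)    # user-contributed enrichments

    def enrich(self, user, statement):
        """Principle of enriching: users add their own interpretations."""
        self.post_hoc.append({"user": user, "statement": statement})

    def filtered_view(self, interests):
        """Principle of filtering: present enrichments matching the user's interests."""
        hits = [e for e in self.post_hoc
                if any(i.lower() in e["statement"].lower() for i in interests)]
        return {**self.a_priori, "enrichments": hits}

rec = MetadataRecord({"title": "A Theory of Digital Library Metadata", "creator": "Alemu, G."})
rec.enrich("user1", "useful for teaching linked data")
rec.enrich("user2", "grounded theory methodology example")
print(rec.filtered_view(["linked data"]))
```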
  11. Hooland, S. van; Verborgh, R.: Linked data for libraries, archives and museums : how to clean, link, and publish your metadata (2014) 0.00
    7.6892605E-4 = product of:
      0.0069203344 = sum of:
        0.0069203344 = product of:
          0.013840669 = sum of:
            0.013840669 = weight(_text_:web in 5153) [ClassicSimilarity], result of:
              0.013840669 = score(doc=5153,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.14422815 = fieldWeight in 5153, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5153)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast-growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited resources. Readers will learn how to critically assess and use (semi-)automated methods of managing metadata through hands-on exercises within the book and on the accompanying website. Each chapter is built around a case study from institutions around the world, demonstrating how freely available tools are being successfully used in different metadata contexts. This handbook delivers the necessary conceptual and practical understanding to empower practitioners to make the right decisions when making their organisations' resources accessible on the Web. Key topics include: the value of metadata; metadata creation - architecture, data models and standards; metadata cleaning; metadata reconciliation; metadata enrichment through Linked Data and named-entity recognition; importing and exporting metadata; ensuring a sustainable publishing model. This will be an invaluable guide for metadata practitioners and researchers within all cultural heritage contexts, from library cataloguers and archivists to museum curatorial staff. It will also be of interest to students and academics within information science and digital humanities fields. IT managers with responsibility for information systems, as well as strategy heads and budget holders at cultural heritage organisations, will find this a valuable decision-making aid.
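As a toy illustration of two of the steps the handbook covers, cleaning and reconciliation, the sketch below reduces them to plain Python; the handbook itself demonstrates freely available tools such as OpenRefine, and the records, vocabulary and URI here are invented:

```python
import re

# Placeholder controlled vocabulary mapping normalised names to identifiers.
vocabulary = {"rembrandt van rijn": "https://example.org/agents/rembrandt-van-rijn"}

records = [
    {"creator": "  Rembrandt  van Rijn "},
    {"creator": "REMBRANDT VAN RIJN"},
]

def clean(value: str) -> str:
    """Collapse whitespace and trim, a typical first cleaning pass."""
    return re.sub(r"\s+", " ", value).strip()

for rec in records:
    name = clean(rec["creator"])
    rec["creator"] = name
    # Reconciliation: link the cleaned string to a controlled vocabulary entry.
    rec["creator_uri"] = vocabulary.get(name.lower())

print(records)
```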
  12. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.00
    7.6892605E-4 = product of:
      0.0069203344 = sum of:
        0.0069203344 = product of:
          0.013840669 = sum of:
            0.013840669 = weight(_text_:web in 117) [ClassicSimilarity], result of:
              0.013840669 = score(doc=117,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.14422815 = fieldWeight in 117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in day-to-day operations. Several management models have been applied on a wide range of fronts, practices that together make up so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential for the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology draws on the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies in the scientific literature as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model makes it possible to process heterogeneous and unstructured data according to the established cut-outs and through the processes listed above, creating value in the composition of dynamic information objects with semantic aggregations to metadata.
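A hedged sketch of the model's five predicted steps chained as a simple pipeline; only the step names and their order come from the abstract, and all values are invented:

```python
# Each step takes the multimedia object's metadata dict and returns an extended copy.
def define_environment(obj):
    return {**obj, "environment": "corporate intranet"}

def define_type_and_source(obj):
    return {**obj, "type": "video", "source": "training portal"}

def distill_data(obj):
    return {**obj, "distilled": {"duration_s": 300, "format": "mp4"}}

def enrich_metadata(obj):
    return {**obj, "metadata": {"topic": "onboarding", "business_process": "HR"}}

def store(obj):
    return {**obj, "stored": True}

steps = [define_environment, define_type_and_source, distill_data, enrich_metadata, store]

multimedia_object = {"id": "obj-001"}
for step in steps:
    multimedia_object = step(multimedia_object)
print(multimedia_object)
```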

Languages

  • e 45
  • d 6
  • pt 1

Types

  • a 36
  • el 11
  • m 11
  • s 6
  • x 2
