Search (243 results, page 2 of 13)

  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. Kahlawi, A.: ¬An ontology driven ESCO LOD quality enhancement (2020) 0.00
    Abstract
     The labor market is a complex system that is difficult to manage. To overcome this challenge, the European Union has launched ESCO, a project that provides a common language for describing the labor market. To support the uptake of this project, its dataset was published as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met: the LOD must be feasible and of high quality, it must provide users with the right answers, and it has to be built according to a clear and correct structure. This study investigates the LOD of ESCO, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries (see the sketch following this entry), which yields solutions for improving quality via a set of rules built in first-order logic. This process was conducted on the basis of a newly proposed ESCO ontology.
    Type
    a
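     Code sketch
     A minimal Python sketch of the kind of SPARQL-based quality check the abstract describes, using SPARQLWrapper. The endpoint URL and the specific check (SKOS concepts missing an English preferred label) are illustrative assumptions, not the study's actual queries.

       from SPARQLWrapper import SPARQLWrapper, JSON

       # Hypothetical endpoint: e.g. a local Fuseki server loaded with the ESCO RDF dump.
       ENDPOINT = "http://localhost:3030/esco/sparql"

       # Quality check: SKOS concepts that lack an English preferred label.
       QUERY = """
       PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
       SELECT ?concept WHERE {
         ?concept a skos:Concept .
         FILTER NOT EXISTS {
           ?concept skos:prefLabel ?label .
           FILTER (lang(?label) = "en")
         }
       } LIMIT 100
       """

       sparql = SPARQLWrapper(ENDPOINT)
       sparql.setQuery(QUERY)
       sparql.setReturnFormat(JSON)
       for row in sparql.query().convert()["results"]["bindings"]:
           print("missing English prefLabel:", row["concept"]["value"])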
  2. Machado, L.; Martínez-Ávila, D.; Barcellos Almeida, M.; Borges, M.M.: Towards a moderate realistic foundation for ontological knowledge organization systems : the question of the naturalness of classifications (2023) 0.00
    Abstract
     Several authors emphasize the need for a change in classification theory due to the influence of a dogmatic and monistic ontology supported by an outdated essentialism. These claims tend to focus on the fallibility of knowledge, the need for a pluralistic view, and the theory-ladenness of observations. Regardless of the legitimacy of these concerns, there is a risk, when they are not moderate, of falling into the opposite relativistic extreme. Based on a narrative review of the literature, we reflectively discuss the theoretical foundations that can serve as a basis for a realist position supporting pluralistic ontological classifications. The goal is to show that, contrary to rather conventional solutions, objective science-based approaches to natural classifications are viable, allowing a proper distinction between ontological and taxonomic questions. Supported by critical scientific realism, we consider such an approach suitable for the development of ontological Knowledge Organization Systems (KOS). We believe that ontological perspectivism can provide the necessary adaptation to the different granularities of reality.
    Type
    a
  3. ChatGPT : Optimizing language models for dialogue (2022) 0.00
    Abstract
    We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
  4. Lund, B.D.: ¬A chat with ChatGPT : how will AI impact scholarly publishing? (2022) 0.00
    Abstract
     This is a short project that serves as inspiration for a forthcoming paper, which will explore the technical side of ChatGPT and the ethical issues it presents for academic researchers, and which will result in a peer-reviewed publication. It demonstrates that ChatGPT is a "chatbot" far more advanced than many alternatives available today, one that may even be usable to draft entire academic manuscripts for researchers. ChatGPT is available via https://chat.openai.com/chat.
  5. Shiri, A.; Kelly, E.J.; Kenfield, A.; Woolcott, L.; Masood, K.; Muglia, C.; Thompson, S.: ¬A faceted conceptualization of digital object reuse in digital repositories (2020) 0.00
    Abstract
     In this paper, we provide an introduction to the concept of digital object reuse and its various connotations in the context of current digital libraries, archives, and repositories. We then propose a faceted categorization of the various types, contexts, and cases of digital object reuse in order to facilitate understanding and communication and to provide a conceptual framework for the assessment of digital object reuse by various cultural heritage and cultural memory organizations (see the sketch following this entry).
    Type
    a
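     Code sketch
     One way a faceted categorization like the one proposed above could be encoded for assessment work, as a Python sketch. The facet names and values here are illustrative placeholders, not the authors' actual facets.

       from dataclasses import dataclass

       @dataclass(frozen=True)
       class ReuseInstance:
           object_type: str    # e.g. "image", "text", "dataset" (placeholder facet)
           reuse_context: str  # e.g. "scholarship", "teaching", "creative work"
           reuse_action: str   # e.g. "quotation", "remix", "computational analysis"

       corpus = [
           ReuseInstance("image", "creative work", "remix"),
           ReuseInstance("text", "scholarship", "quotation"),
           ReuseInstance("image", "teaching", "quotation"),
       ]

       # Faceted assessment reduces to filtering on any combination of facets.
       images = [r for r in corpus if r.object_type == "image"]
       print(len(images), "image reuse instances")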
  6. Hausser, R.: Grammatical disambiguation : the linear complexity hypothesis for natural language (2020) 0.00
    Abstract
     DBS uses a strictly time-linear derivation order. Therefore the basic computational complexity degree of DBS is linear time. The only way to increase DBS complexity above linear is repeating ambiguity. In natural language, however, repeating ambiguity is prevented by grammatical disambiguation. A classic example of a grammatical ambiguity is the 'garden path' sentence The horse raced by the barn fell. The continuation horse+raced introduces an ambiguity between horse which raced and horse which was raced, leading to two parallel derivation strands up to The horse raced by the barn. Depending on whether the continuation is punctuation or a verb, they are grammatically disambiguated, resulting in unambiguous output (see the sketch following this entry). A repeated ambiguity occurs in The man who loves the woman who feeds Lucy who Peter loves., with who serving as subject or as object. These readings are grammatically disambiguated by continuing after who with a verb or a noun.
    Type
    a
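     Code sketch
     A toy Python illustration of the disambiguation step described in the abstract: two derivation strands are carried in parallel up to "The horse raced by the barn", and the next token prunes one of them. This sketches the idea only; it is not Hausser's DBS implementation.

       # Two readings carried in parallel after the ambiguous horse+raced:
       STRANDS = {
           "main_verb": "horse which raced (by the barn)",
           "reduced_relative": "horse which was raced (by the barn)",
       }

       def disambiguate(next_token: str) -> str:
           """A verb continuation selects the reduced-relative reading;
           sentence-final punctuation selects the main-verb reading."""
           if next_token == ".":
               return STRANDS["main_verb"]
           return STRANDS["reduced_relative"]

       print(disambiguate("fell"))  # The horse [which was] raced by the barn fell.
       print(disambiguate("."))     # The horse raced by the barn.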
  7. Hausser, R.: Language and nonlanguage cognition (2021) 0.00
    Abstract
     A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  8. Roose, K.: ¬The brilliance and weirdness of ChatGPT (2022) 0.00
    Abstract
    A new chatbot from OpenAI is inspiring awe, fear, stunts and attempts to circumvent its guardrails.
    Type
    a
  9. Chessum, K.; Haiming, L.; Frommholz, I.: ¬A study of search user interface design based on Hofstede's six cultural dimensions (2022) 0.00
    Type
    a
  10. Westphalen, A. von: Künstliche Intelligenz mit lebensgefährlichen Nebenwirkungen (2023) 0.00
    Type
    a
  11. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.00
    Abstract
     This paper discusses the use of wiki technology to provide a navigation structure for a collection of newspaper clippings. We give an overview of the architecture of the wiki, discuss the navigation structure, and pose the question: is the navigation structure an index (and, if so, of what type), or is it just a linkage structure or a topic map? Does such a distinction really matter? Are these definitions in reality function-based?
  12. Franke, T.; Zoubir, M.: Technology for the people? : humanity as a compass for the digital transformation (2020) 0.00
    Abstract
    How do we define what technology is for humans? One perspective suggests that it is a tool enabling the use of valuable resources such as time, food, health and mobility. One could say that in its cultural history, humanity has developed a wide range of artefacts which enable the effective utilisation of these resources for the fulfilment of physiological, but also psychological, needs. This paper explores how this perspective may be used as an orientation for future technological innovation. Hence, the goal is to provide an accessible discussion of such a psychological perspective on technology development that could pave the way towards a truly human-centred digital transformation.
    Content
     Cf.: https://www.wirtschaftsdienst.eu/inhalt/jahr/2020/heft/13/beitrag/technology-for-the-people-humanity-as-a-compass-for-the-digital-transformation.html. DOI: 10.1007/s10273-020-2609-3.
    Type
    a
  13. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.00
    Abstract
     We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results (see the sketch following this entry). This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
    Type
    a
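     Code sketch
     A minimal Python baseline in the spirit of the term extractors compared above: frequency-ranked word n-grams from raw text. The stop list and sample sentence are illustrative; the systems evaluated in the paper are considerably more sophisticated.

       import re
       from collections import Counter

       STOP = {"the", "a", "an", "of", "and", "is", "in", "to", "we", "for", "that", "come"}

       def candidate_terms(text: str, n: int = 2) -> Counter:
           """Tally n-gram candidate terms after dropping stopwords."""
           tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP]
           grams = zip(*(tokens[i:] for i in range(n)))
           return Counter(" ".join(g) for g in grams)

       sample = ("A functor is a map between categories. A natural transformation "
                 "is a map between functors, and adjoint functors come in pairs.")
       for term, freq in candidate_terms(sample).most_common(5):
           print(f"{freq:2d}  {term}")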
  14. Hofstadter, D.: Artificial neural networks today are not conscious, according to Douglas Hofstadter (2022) 0.00
    Content
     D&D: Why does President Obama not have a prime number of friends?
     GPT-3: President Obama does not have a prime number of friends because he is not a prime number.
     D&D: How many pieces of sound are there in a typical cumulonimbus cloud?
     GPT-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.
     D&D: How many cumulus clouds are there in a mile-high vase?
     GPT-3: There are about 1,500 cumulus clouds in a mile-high vase.
     D&D: How many parts will a violin break into if a jelly bean is dropped on it?
     GPT-3: A violin will break into four parts if a jelly bean is dropped on it.
     D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?
     GPT-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.
    Type
    a
  15. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.00
    Abstract
    The paper examines the development of facet analysis as a methodology and the role it plays in building classifications and other knowledge-organization tools. The use of categorical analysis in areas other than library and information science is also considered. The suitability of the faceted approach for humanities documentation is explored through a critical description of the FATKS (Facet Analytical Theory in Managing Knowledge Structure for Humanities) project carried out at University College London. This research focused on building a conceptual model for the subject of religion together with a relational database and search-and-browse interfaces that would support some degree of automatic classification. The paper concludes with a discussion of the differences between the conceptual model and the vocabulary used to populate it, and how, in the case of religion, the choice of terminology can create an apparent bias in the system.
    Type
    a
  16. Lund, B.D.: ¬A brief review of ChatGPT : its value and the underlying GPT technology (2023) 0.00
    Abstract
     In this review paper, ChatGPT, a public tool developed by OpenAI that utilizes GPT technology to fulfill a range of text-based requests, is examined. ChatGPT is a sophisticated chatbot capable of understanding and interpreting user requests, generating appropriate responses in nearly natural human language, and completing advanced tasks such as writing thank-you letters and addressing productivity issues. The details of how ChatGPT works, as well as the potential impacts of this technology on various industries, are discussed. The concept of the Generative Pre-trained Transformer (GPT), the language model on which ChatGPT is based, is also explored, as well as the process of unsupervised pretraining and supervised fine-tuning that is used to refine the GPT algorithm. A letter written by ChatGPT to a colleague from Iran is presented as an example of the chatbot's capabilities.
  17. Unzicker, A.: Coronavirus : das Versagen der alternativen Medien (2020) 0.00
    Type
    a
  18. Favato Barcelos, P.P.; Sales, T.P.; Fumagalli, M.; Guizzardi, G.; Valle Sousa, I.; Fonseca, C.M.; Romanenko, E.; Kritz, J.: ¬A FAIR model catalog for ontology-driven conceptual modeling research (2022) 0.00
    Abstract
     Conceptual models are artifacts representing conceptualizations of particular domains. Hence, multi-domain model catalogs serve as empirical sources of knowledge and insights about specific domains, about the use of a modeling language's constructs, and about the patterns and anti-patterns recurrent in the models of that language across different domains. However, to support domain and language learning, model reuse, knowledge discovery for humans, and reliable automated processing and analysis by machines, these catalogs must be built following generally accepted quality requirements for scientific data management. In particular, all scientific (meta)data, including models, should be created according to the FAIR principles (Findability, Accessibility, Interoperability, and Reusability). In this paper, we report on the construction of a FAIR model catalog for ontology-driven conceptual modeling research, a trending paradigm lying at the intersection of conceptual modeling and ontology engineering, in which the Unified Foundational Ontology (UFO) and OntoUML have emerged as among the most adopted technologies. In this initial release, the catalog includes over a hundred models developed in a variety of contexts and domains. The paper also discusses the implications of such a resource for research in (ontology-driven) conceptual modeling.
    Type
    a
  19. Wolf, S.: Automating authority control processes (2020) 0.00
    Abstract
     Authority control is an important part of cataloging since it helps provide consistent access to names, titles, subjects, and genre/forms. There are a variety of methods for providing authority control, ranging from manual, time-consuming processes to automated processes. However, the automated processes often seem out of reach for small libraries when it comes to using a pricey vendor or expert cataloger. This paper introduces ideas on how to handle authority control using a variety of tools, both paid and free. The author describes how their library handles authority control; compares vendors and programs that can be used to provide varying levels of authority control; and demonstrates authority control using MarcEdit (see the sketch following this entry).
    Type
    a
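     Code sketch
     A minimal Python sketch of automated authority checking in the spirit of the paper, using pymarc to compare 650 subject headings against a local authority list. The input file name and the authority list are placeholders; the paper itself demonstrates comparable workflows with MarcEdit and vendor services.

       from pymarc import MARCReader

       # Placeholder authority list; in practice this would be loaded from
       # an authority file or a service lookup.
       AUTHORIZED = {"Cataloging", "Authority files (Information retrieval)"}

       with open("bibs.mrc", "rb") as fh:  # hypothetical input file
           for record in MARCReader(fh):
               if record is None:
                   continue  # skip records pymarc could not parse
               for field in record.get_fields("650"):
                   for heading in field.get_subfields("a"):
                       heading = heading.rstrip(". ")
                       if heading not in AUTHORIZED:
                           print(f"unmatched heading: {heading!r}")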
  20. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.00
    Abstract
     The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA, in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls (see the sketch following this entry).
    Type
    a
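     Code sketch
     A minimal Python sketch of the kind of Type-value audit described above, using the DPLA API's facet support to tally sourceResource.type values for one provider. The API key and provider name are placeholders, and the project's actual analysis also relied on OpenRefine and XSLT.

       import requests

       resp = requests.get(
           "https://api.dp.la/v2/items",
           params={
               "api_key": "YOUR_DPLA_KEY",  # placeholder key
               "provider.name": "Illinois Digital Heritage Hub",
               "facets": "sourceResource.type",
               "page_size": 0,  # facet counts only; no item records needed
           },
           timeout=30,
       )
       resp.raise_for_status()
       for bucket in resp.json()["facets"]["sourceResource.type"]["terms"]:
           print(f'{bucket["count"]:7d}  {bucket["term"]}')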

Languages

  • d 180
  • e 61
  • sp 1

Types

  • a 222
  • p 9