Search (12 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • type_ss:"el"
  1. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    Score: 0.022289194 = 0.08915678 × coord(1/2) × coord(1/2), where 0.08915678 = weight(_text_:22 in 4888) [ClassicSimilarity] = queryWeight 0.19203177 (idf 3.5018296 [docFreq=3622, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.46428138 (tf 1.4142135 [freq=2.0] × idf 3.5018296 × fieldNorm 0.09375) (recomputed in the sketch below)
    
    Date
    1.3.2013 14:56:22
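    The score breakdown shown for entry 1 is standard Lucene ClassicSimilarity explain output and can be checked by hand. Below is a minimal Python sketch that recomputes it from the constants printed in the explanation; the idf formula ln(maxDocs/(docFreq+1))+1 is Lucene's standard one, while queryNorm and fieldNorm are taken as given rather than derived.

        import math

        freq = 2.0                    # occurrences of "22" in doc 4888
        idf = 3.5018296               # ln(44218 / (3622 + 1)) + 1
        query_norm = 0.054837555      # taken from the explain output
        field_norm = 0.09375          # length normalization of the matched field
        coord = 0.5 * 0.5             # 1 of 2 clauses matched, applied at two levels

        tf = math.sqrt(freq)                    # 1.4142135
        query_weight = idf * query_norm         # 0.19203177
        field_weight = tf * idf * field_norm    # 0.46428138
        score = query_weight * field_weight * coord
        print(score)                            # ≈ 0.022289194 (explain prints 32-bit floats)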
  2. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.02
    Score: 0.020200431 = 0.080801725 × coord(1/2) × coord(1/2), where 0.080801725 = weight(_text_:work in 267) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.40144894 (tf 2.0 [freq=4.0] × idf 3.6703904 × fieldNorm 0.0546875)
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment in the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
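    As a toy illustration of the candidate-keyword idea in this abstract, the sketch below scans sentences for a pair of concepts already linked in the ontology and counts the words between them as candidate relation labels. The sentences, the concept pair, and the between-words heuristic are illustrative assumptions, not the authors' implementation.

        import re
        from collections import Counter

        sentences = [
            "The heart pumps blood through the body.",
            "The heart supplies blood to every organ.",
        ]

        def candidate_relations(sentences, c1, c2):
            counts = Counter()
            for sent in sentences:
                tokens = re.findall(r"[a-z]+", sent.lower())
                if c1 in tokens and c2 in tokens:
                    i, j = sorted((tokens.index(c1), tokens.index(c2)))
                    counts.update(tokens[i + 1 : j])  # words between the pair
            return counts.most_common()

        # "heart" and "blood" stand in for concepts taken from a domain ontology
        print(candidate_relations(sentences, "heart", "blood"))
        # [('pumps', 1), ('supplies', 1)]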
  3. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    Score: 0.014859463 = 0.059437852 × coord(1/2) × coord(1/2), where 0.059437852 = weight(_text_:22 in 1490) [ClassicSimilarity] = queryWeight 0.19203177 (idf 3.5018296 [docFreq=3622, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.30952093 (tf 1.4142135 [freq=2.0] × idf 3.5018296 × fieldNorm 0.0625)
    
    Date
    22.3.2015 9:30:24
  4. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.01
    Score: 0.014859463 = 0.059437852 × coord(1/2) × coord(1/2), where 0.059437852 = weight(_text_:22 in 835) [ClassicSimilarity] = queryWeight 0.19203177 (idf 3.5018296 [docFreq=3622, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.30952093 (tf 1.4142135 [freq=2.0] × idf 3.5018296 × fieldNorm 0.0625)
    
    Date
    29.12.2022 18:22:55
  5. Rieger, F.: Lügende Computer (2023) 0.01
    Score: 0.014859463 = 0.059437852 × coord(1/2) × coord(1/2), where 0.059437852 = weight(_text_:22 in 912) [ClassicSimilarity] = queryWeight 0.19203177 (idf 3.5018296 [docFreq=3622, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.30952093 (tf 1.4142135 [freq=2.0] × idf 3.5018296 × fieldNorm 0.0625)
    
    Date
    16.3.2023 19:22:55
  6. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.01
    Score: 0.01442888 = 0.05771552 × coord(1/2) × coord(1/2), where 0.05771552 = weight(_text_:work in 2861) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.28674924 (tf 2.0 [freq=4.0] × idf 3.6703904 × fieldNorm 0.0390625)
    
    Abstract
    Today's conventional search engines hardly provide content relevant to the user's query, because the context and semantics of the request are not analyzed to the full extent. Hence the need for semantic web search (SWS), an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it sits one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm is applied that fetches more apt results for the user query.
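    The keyword-expansion step described here is easy to picture in code. A minimal sketch, assuming a toy hand-written ontology in place of SIEU's actual knowledge base:

        ontology = {
            "professor": {"faculty", "lecturer"},
            "course": {"subject", "class"},
        }

        def expand_query(query):
            terms = query.lower().split()
            expanded = set(terms)
            for term in terms:
                expanded |= ontology.get(term, set())  # add neighbouring concepts
            return sorted(expanded)

        print(expand_query("professor course"))
        # ['class', 'course', 'faculty', 'lecturer', 'professor', 'subject']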
  7. Park, J.S.; O'Brien, J.C.; Cai, C.J.; Ringel Morris, M.; Liang, P.; Bernstein, M.S.: Generative agents : interactive simulacra of human behavior (2023) 0.01
    Score: 0.01442888 = 0.05771552 × coord(1/2) × coord(1/2), where 0.05771552 = weight(_text_:work in 972) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.28674924 (tf 2.0 [freq=4.0] × idf 3.6703904 × fieldNorm 0.0390625)
    
    Abstract
    Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
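    The retrieval step of this architecture can be sketched compactly. The paper scores memories on recency, importance, and relevance; the sketch below keeps only the first two, and the decay constant and equal weighting are illustrative assumptions.

        import math
        from dataclasses import dataclass

        @dataclass
        class Memory:
            text: str
            timestamp: float   # hours since simulation start
            importance: float  # 0..1, assigned when the memory is stored

        def retrieve(memories, now, k=2):
            def score(m):
                recency = math.exp(-0.1 * (now - m.timestamp))  # decays over time
                return recency + m.importance                   # equal weights assumed
            return sorted(memories, key=score, reverse=True)[:k]

        stream = [
            Memory("cooked breakfast", 1.0, 0.2),
            Memory("planned a Valentine's Day party", 2.0, 0.9),
            Memory("watered the plants", 8.0, 0.1),
        ]
        print([m.text for m in retrieve(stream, now=9.0)])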
  8. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.01
    Score: 0.014283863 = 0.05713545 × coord(1/2) × coord(1/2), where 0.05713545 = weight(_text_:work in 156) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.28386727 (tf 1.4142135 [freq=2.0] × idf 3.6703904 × fieldNorm 0.0546875)
    
    Abstract
    In this work, we present the results of research evaluating a methodology for extracting mathematical terms for precalculus using techniques for semantically oriented statistical search. We use a corpus-based approach and a combination of different statistical techniques for extracting keywords, collocations and co-occurrences, as incorporated in the Sketch Engine software. We evaluate the candidate collocation terms for the basic concept function(s) and validate the methodology against definitions of conceptual terms from the precalculus domain. Finally, we offer a hierarchical representation of the conceptual terms and discuss the results with respect to their possible applications.
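    As a sketch of the statistically-based collocation extraction the abstract refers to, the snippet below ranks bigrams containing the target term "function" by pointwise mutual information over a toy corpus. The corpus and the frequency threshold are illustrative assumptions; the authors used the Sketch Engine software rather than hand-rolled code.

        import math
        from collections import Counter

        corpus = ("the linear function increases the quadratic function "
                  "decreases a linear function is a function").split()

        unigrams = Counter(corpus)
        bigrams = Counter(zip(corpus, corpus[1:]))
        n = len(corpus)

        def pmi(w1, w2):
            p_joint = bigrams[(w1, w2)] / (n - 1)
            return math.log2(p_joint / ((unigrams[w1] / n) * (unigrams[w2] / n)))

        for (w1, w2), c in bigrams.items():
            if "function" in (w1, w2) and c > 1:      # keep recurring pairs only
                print(w1, w2, round(pmi(w1, w2), 2))  # linear function 1.91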
  9. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review: digital twin in healthcare (2022) 0.01
    Score: 0.008162207 = 0.032648828 × coord(1/2) × coord(1/2), where 0.032648828 = weight(_text_:work in 851) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.16220987 (tf 1.4142135 [freq=2.0] × idf 3.6703904 × fieldNorm 0.03125)
    
    Abstract
    Literature review articles are essential to summarize the related work in a selected field. However, covering all related studies takes too much time and effort. This study asks how artificial intelligence can be used in this process. We used ChatGPT to create a literature review article to show the current stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is a first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. As a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI, it can engage in conversation with users naturally and intuitively and respond to a wide range of topics and contexts, which suits applications such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices on its own; however, it can provide information and guidance on ways to improve people's academic writing skills.
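    The paraphrasing step at the core of the study's pipeline would look roughly like the sketch below. It assumes the openai Python client (v1+) with an OPENAI_API_KEY set in the environment; the model name and prompt wording are illustrative guesses, not details reported by the authors, who worked through the ChatGPT interface.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def paraphrase(abstract):
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": "Paraphrase this abstract for a literature review:\n"
                               + abstract,
                }],
            )
            return resp.choices[0].message.content

        # abstracts = [...]  # collected from a Google Scholar keyword search
        # review = "\n\n".join(paraphrase(a) for a in abstracts)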
  10. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.01
    Score: 0.008162207 = 0.032648828 × coord(1/2) × coord(1/2), where 0.032648828 = weight(_text_:work in 872) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.16220987 (tf 1.4142135 [freq=2.0] × idf 3.6703904 × fieldNorm 0.03125)
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
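    The few-shot setting this abstract describes amounts to building a single text prompt: a task description, a handful of solved demonstrations, and an unsolved case the model is expected to complete, with no gradient updates. A minimal sketch, using word unscrambling (one of the tasks tested) with made-up demonstrations:

        demonstrations = [
            ("tca", "cat"),
            ("odg", "dog"),
        ]

        def few_shot_prompt(query):
            lines = ["Unscramble the word."]
            for scrambled, word in demonstrations:
                lines.append("scrambled: " + scrambled)
                lines.append("word: " + word)
            lines.append("scrambled: " + query)
            lines.append("word:")          # the model continues from here
            return "\n".join(lines)

        print(few_shot_prompt("pplae"))    # a capable model should answer "apple"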
  11. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    Score: 0.0074297315 = 0.029718926 × coord(1/2) × coord(1/2), where 0.029718926 = weight(_text_:22 in 4217) [ClassicSimilarity] = queryWeight 0.19203177 (idf 3.5018296 [docFreq=3622, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.15476047 (tf 1.4142135 [freq=2.0] × idf 3.5018296 × fieldNorm 0.03125)
    
    Date
    22.1.2018 11:32:44
  12. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.01
    Score: 0.0071419314 = 0.028567726 × coord(1/2) × coord(1/2), where 0.028567726 = weight(_text_:work in 923) [ClassicSimilarity] = queryWeight 0.20127523 (idf 3.6703904 [docFreq=3060, maxDocs=44218] × queryNorm 0.054837555) × fieldWeight 0.14193363 (tf 1.4142135 [freq=2.0] × idf 3.6703904 × fieldNorm 0.02734375)
    
    Abstract
    He doesn't dismiss the potential of LLMs to become useful assistants in all sorts of ways; Google and Microsoft have already announced that they will be integrating LLMs into their search and office productivity software. But he talked me through some of his criticisms of the technology's apparent capabilities. At the heart of Dr Marcus's thoughtful critique is an attempt to put LLMs into proper context. Deep learning, the underlying technology that makes LLMs work, is only one piece of the puzzle in the quest for machine intelligence. To reach the level of artificial general intelligence (AGI) that many tech companies strive for, i.e. machines that can plan, reason and solve problems in the way human brains can, they will need to deploy a suite of other AI techniques. These include, for example, the kind of "symbolic AI" that was popular before artificial neural networks and deep learning became all the rage.