Search (1537 results, page 77 of 77)

  • Filter: type_ss:"el"
  1. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015)
    Abstract
    The discovery service used at the library of the Technical University of Applied Sciences Wildau (TUAS Wildau), a search engine and service called WILBERT, comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of individual articles, including their bibliographic records and full texts, the holdings would be estimated at a hundred million documents. Features such as ranking, autocompletion, multi-faceted classification, and refinement options reduce the number of hits, but they do not provide intuitive support for a systematic overview of the topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro-thesauri for MINT (STEM) subjects in order to build advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesauri. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students an insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in the development of the software regarding the interface and functionality of iQvoc. As a first step, 3,000 terms have been included in our discovery tool WILBERT.
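    A minimal sketch (not from the paper) of the kind of SKOS data a tool like iQvoc manages in the background, using Python's rdflib; the namespace, concept URIs, and labels are invented for illustration:

    ```python
    # Hypothetical micro-thesaurus fragment: one broader concept, one narrower
    # concept with an established abbreviation as a skos:altLabel.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus/")  # invented namespace

    g = Graph()
    g.bind("skos", SKOS)

    logistics = EX["logistics"]
    scm = EX["supply-chain-management"]

    g.add((logistics, RDF.type, SKOS.Concept))
    g.add((logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))

    g.add((scm, RDF.type, SKOS.Concept))
    g.add((scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
    g.add((scm, SKOS.altLabel, Literal("SCM", lang="en")))  # abbreviation
    g.add((scm, SKOS.broader, logistics))  # hierarchy used for browsing

    print(g.serialize(format="turtle"))
    ```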
  2. Chaves Guimarães, J.A.; Pinho, F.A.; Martínez-Ávila, D.; Campbell, D.G.; Nascimento, F.A.: Knowledge organization and the power to name : LGBTQ terminology and the polyhedron of empowerment (2017)
    Abstract
    This paper uses Hope Olson's concept of "the power to name" to explore the terminological practices of the LGBTQ community in the Cariri region of Brazil in the years between 2006 and 2013. LGBTQ communities can seize back the "power to name," traditionally exerted by a heteronormative society upon marginalized groups, by organizing their cultural and practical knowledge from within, and by exercising the power to name themselves and their specific domains and cultural practices. The study showed that knowledge organization - the act of defining entities and categories and assigning specific names to them - is a gesture of self-empowerment on many different levels. The "power of self-naming" in this LGBTQ community is a polyhedron in which some facets are frequent, such as the power to empower or affirm an identity. On the one hand, the names and categories break through gender, geographical and temporal specificity to embrace terms, names, and idioms drawn from a range of different countries, traditions, languages, and time periods. On the other hand, these names and categories work to reinforce and affirm the geographical and cultural specificity of the Cariri region itself, embedding its pride and self-affirmation within the varied languages and heteronormative history of Portuguese colonization in that region. In selecting terms and categories to name, organize and celebrate their identities, the LGBTQ people of Cariri have taken the power to name: not as information intermediaries striving for objectivity and neutrality, but as committed members of a marginalized but vital community.
  3. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: unlike humans, they can't do a double take (2018)
    Theme
    Information
  4. Berners-Lee, T.: The Father of the Web will give the Internet back to the people (2018)
    Content
    "This week, Berners-Lee will launch Inrupt (https://www.inrupt.com), a startup that he has been building, in stealth mode, for the past nine months. For years now, Berners-Lee and other internet activists have been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over. "We have to do it now," he says, displaying an intensity and urgency that is uncharacteristic for this soft-spoken academic. "It's a historical moment." If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. On his screen, there is a simple-looking web page with tabs across the top: Tim's to-do list, his calendar, chats, address book. He built this app, one of the first on Solid, for his personal use. It is simple, spare. In fact, it's so plain that, at first glance, it's hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid's decentralized technology, allows Berners-Lee to access all of his data seamlessly: his calendar, his music library, videos, chat, research. It's like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp. The difference here is that, on Solid, all the information is under his control. In: Exclusive: Tim Berners-Lee tells us his radical new plan to upend the World Wide Web, https://www.fastcompany.com/90243936/exclusive-tim-berners-lee-tells-us-his-radical-new-plan-to-upend-the-world-wide-web."
  5. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022)
    Abstract
    Literature review articles are essential to summarize the related work in a selected field. However, covering all related studies takes too much time and effort. This study asks how artificial intelligence can be used in this process. We used ChatGPT to create a literature review article in order to show the current stage of the OpenAI ChatGPT application. As the subject, the applications of digital twins in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is a first attempt to show that the compilation and expression of knowledge can be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need human help to change academic writing practices; however, it can provide information and guidance on ways to improve people's academic writing skills.
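    The authors worked through the ChatGPT web interface. Purely as a hypothetical sketch, the paraphrasing step could be scripted against OpenAI's public chat-completions HTTP endpoint along the following lines; the endpoint path, model name, and response shape follow OpenAI's publicly documented API and may change:

    ```python
    # Hypothetical automation of the paper's paraphrasing step; not the
    # authors' code. Requires an OPENAI_API_KEY environment variable.
    import os
    import requests

    def paraphrase(abstract: str) -> str:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",  # assumed model name
                "messages": [{
                    "role": "user",
                    "content": "Paraphrase this abstract in one paragraph:\n"
                               + abstract,
                }],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    ```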
  6. Paskin, N.: Identifier interoperability : a report on two recent ISO activities (2006)
    Abstract
    Two significant activities within ISO, the International Organization for Standardization, are underway, each of which has potential implications for the management of content by digital libraries and their users. Moreover, these two activities are complementary and have the potential to provide tools for significantly improved identifier interoperability. This article presents a report on these: the first activity investigates the practical implications of interoperability across the family of ISO TC46/SC9 identifiers (better known as the ISBN and related identifiers); the second activity is the implementation of an ontology-based data dictionary that could provide a mechanism for this, the ISO/IEC 21000-6 standard. ISO/TC 46 is the ISO Technical Committee responsible for standards of "Information and documentation". Subcommittee 9 (SC9) of that body is responsible for "Presentation, identification and description of documents": the standards that it manages are identifiers familiar to the content and digital library communities, including the International Standard Book Number (ISBN); International Standard Serial Number (ISSN); International Standard Recording Code (ISRC); International Standard Music Number (ISMN); International Standard Audio-visual Number (ISAN) and the related Version identifier for Audio-visual Works (V-ISAN); and the International Standard Musical Work Code (ISWC). Most recently, ISO has introduced the International Standard Text Code (ISTC), and is about to consider standardisation of the DOI system. The ISO identifier schemes provide numbering schemes as labels of entities of "content": many of the identifiers have as referents abstract content entities ("works" rather than a specific physical or digital form: e.g., ISAN, ISWC, ISTC). The existing schemes are numbering management schemes, not tied to any specific implementation (hence for internet "actionability", these identifiers may be incorporated into URN, URI, or DOI formats, etc.). Recently SC9 has requested that new and revised identifier schemes specify mandatory structured metadata to specify the item identified; that metadata is now becoming key to interoperability.
    Section 2 below is based extensively on the report of the output from that workshop, with minor editorial changes to reflect points raised in the subsequent discussion. The second activity, not yet widely appreciated as being related, is the development of a content-focussed data dictionary within MPEG. ISO/IEC JTC 1/SC29, The Moving Picture Experts Group (MPEG), is formally a joint working group of ISO and the International Electrotechnical Commission. Originally best known for compression standards for audio, MPEG now includes the MPEG-21 "Multimedia Framework", which includes several components of digital rights management technology standardisation. Some of the components are already being used in digital library activities. One component is a Rights Data Dictionary that was established as a component to support activities such as the MPEG Rights Expression Language. In April 2005, the ISO/IEC Technical Management Board appointed a Registration Authority for the MPEG 21 Rights Data Dictionary (ISO/IEC Information technology - Multimedia framework (MPEG-21) - Part 6: Rights Data Dictionary, ISO/IEC 21000-6), and an implementation of the dictionary is about to be launched. However, the Dictionary design is based on a generic interoperability framework, and it will offer extensive additional possibilities. The design of the dictionary goes back to one of the major studies of the conceptual model of interoperability, <indecs>. Section 3 below provides a brief summary of the origins and possible applications of the ISO/IEC 21000-6 Dictionary.
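    By way of illustration only (the example is not from the article): the TC46/SC9 identifiers pair a structured number with a simple integrity rule, and for ISBN-13 that rule is a weighted check digit that a few lines of Python can verify:

    ```python
    # ISBN-13 check: digits weighted 1, 3, 1, 3, ...; the total must be
    # divisible by 10 for the identifier to be well formed.
    def isbn13_is_valid(isbn: str) -> bool:
        digits = [int(c) for c in isbn if c.isdigit()]
        if len(digits) != 13:
            return False
        total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
        return total % 10 == 0

    print(isbn13_is_valid("978-0-306-40615-7"))  # True
    print(isbn13_is_valid("978-0-306-40615-8"))  # False (wrong check digit)
    ```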
  7. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005)
    Content
    This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus (AAT)) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low-level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata, that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it.
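    A hedged sketch of the payoff described above: once a vocabulary is published in SKOS, a search application can expand a query term to everything narrower than it with a short SPARQL query. The file name and root label are hypothetical; rdflib's SPARQL engine with property-path support is assumed:

    ```python
    # Query expansion over a SKOS vocabulary: find the labels of a concept
    # and of all concepts reachable from it via skos:broader chains.
    from rdflib import Graph

    g = Graph().parse("vocabulary.ttl", format="turtle")  # hypothetical file

    QUERY = """
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?label WHERE {
        ?concept skos:broader* ?root ;
                 skos:prefLabel ?label .
        ?root skos:prefLabel "Logistics"@en .
    }
    """
    for row in g.query(QUERY):
        print(row.label)  # the root term plus every narrower term
    ```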
  8. Thomas, C.; McDonald, R.H.; McDowell, C.S.: Overview - Repositories by the numbers (2007)
    Abstract
    Scholarly digital repositories continue to be one of the most dynamic and varying components of the emerging digital research library. Little consensus is evident on matters such as depositing content in disciplinary or institutional repositories, or both. Debates about deposit mandates and access to research have spilled into the political arena and have focused much attention on various aspects of digital repositories, including the economics and patterns of scholarly publishing, systems and technology, governmental and organizational policies, access, accountability, research impact, and the motivations of individual researchers. Scholarly digital repositories are a rich area for both empirical research and philosophical debate, and are the central theme of a growing body of published literature. It is surprising, therefore, that so much is still unknown about the basic nature of digital repositories, including both differences and similarities. As the two Repositories by the Numbers articles in this issue show, digital scholarly repositories are diversifying both in their general nature and in the information they contain. Because there is still much to be discovered or understood at the most basic levels of digital repositories, co-authors Chuck Thomas and Robert H. McDonald and author Cat McDowell offer readers two different but complementary statistical studies of various types of institutional and disciplinary repositories. Reiterating a theme of many of the recent works presented at the 2nd International Conference on Institutional Repositories, Thomas and McDonald apply statistical techniques to explore patterns of scholarly participation by more than 30,000 authors in several categories of repositories. McDowell reports on her ongoing analysis of the growth and development of institutional repositories in American universities and colleges. Together, these articles reveal new aspects of the digital repository landscape, and present data that will be of immense interest to repository planners and sponsors.
  9. Hammond, T.; Hannay, T.; Lund, B.; Scott, J.: Social bookmarking tools (I) : a general review (2005)
    Abstract
    A number of such utilities are presented here, together with an emergent new class of tools that caters more to the academic communities and that stores not only user-supplied tags, but also structured citation metadata terms wherever it is possible to glean this information from service providers. This provision of rich, structured metadata means that the user is provided with an accurate third-party identification of a document, which could be used to retrieve that document, but is also free to search on user-supplied terms so that documents of interest (or rather, references to documents) can be made discoverable and aggregated with other similar descriptions either recorded by the user or by other users. Matt Biddulph in an XML.com article last year, in which he reviews one of the better known social bookmarking tools, del.icio.us, declares that the "del.icio.us-space has three major axes: users, tags, and URLs". We fully support that assessment but choose to present this deconstruction in a reverse order. This paper thus first recaps a brief history of bookmarks, then discusses the current interest in tagging, moves on to look at certain social issues, and finally considers some of the feature sets offered by the new bookmarking tools. A general review of a number of common social bookmarking tools is presented in the annex. A companion paper describes a case study in more detail: the tool that Nature Publishing Group has made available to the scientific community as an experimental entrée into this field - Connotea; our reasons for endeavouring to provide such a utility; and experiences gained and lessons learned.
  10. Zia, L.L.: The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program : new projects and a progress report (2001)
    Theme
    Information Gateway
  11. Baker, T.: Languages for Dublin Core (1998)
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
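    A toy sketch of that architecture (the labels are invented, not official Dublin Core translations): the machine-readable token is shared across all languages, while the name shown to people varies:

    ```python
    # Shared indexing tokens with per-language display names; metadata
    # indexed under the token stays interoperable across languages.
    DC_LABELS = {
        "Creator": {"en": "Creator", "de": "Urheber", "fr": "Créateur"},
        "Subject": {"en": "Subject", "de": "Thema", "fr": "Sujet"},
    }

    def display_name(token: str, lang: str) -> str:
        # Fall back to the shared token when no translation is registered.
        return DC_LABELS.get(token, {}).get(lang, token)

    print(display_name("Creator", "de"))  # Urheber
    print(display_name("Creator", "pt"))  # Creator (fallback to the token)
    ```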
  12. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014)
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. For this, multiword expressions should be identified first in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
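    The thesis develops supervised machine-learning models; as a much simpler, purely illustrative baseline, MWE candidates can be ranked by pointwise mutual information over bigram counts:

    ```python
    # Rank adjacent word pairs by PMI; high-PMI pairs such as "rock n" are
    # candidate multiword expressions. A toy baseline, not the thesis method.
    import math
    from collections import Counter

    def pmi_bigrams(tokens, min_count=2):
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        n = len(tokens)
        scores = {}
        for (w1, w2), c in bigrams.items():
            if c < min_count:
                continue
            p_xy = c / (n - 1)
            p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
            scores[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
        return sorted(scores.items(), key=lambda kv: -kv[1])

    text = "rock n roll is here to stay rock n roll will never die".split()
    print(pmi_bigrams(text)[:2])
    ```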
  13. Brand, A.: CrossRef turns one (2001)
    Abstract
    CrossRef, the only full-blown application of the Digital Object Identifier (DOI®) System to date, is now a little over a year old. What started as a cooperative effort among publishers and technologists to prototype DOI-based linking of citations in e-journals evolved into an independent, non-profit enterprise in early 2000. We have made considerable headway during our first year, but there is still much to be done. When CrossRef went live with its collaborative linking service last June, it had enabled reference links in roughly 1,100 journals from a member base of 33 publishers, using a functional prototype system. The DOI-X prototype was described in an article published in D-Lib Magazine in February of 2000. On the occasion of CrossRef's first birthday as a live service, this article provides a non-technical overview of our progress to date and the major hurdles ahead. The electronic medium enriches the research literature arena for all players -- researchers, librarians, and publishers -- in numerous ways. Information has been made easier to discover, to share, and to sell. To take a simple example, the aggregation of book metadata by electronic booksellers was a huge boon to scholars seeking out obscure backlist titles, or discovering books they would never otherwise have known to exist. It was equally a boon for the publishers of those books, who saw an unprecedented surge in sales of backlist titles with the advent of centralized electronic bookselling. In the serials sphere, even in spite of price increases and the turmoil surrounding site licenses for some prime electronic content, libraries overall are now able to offer more content to more of their patrons. Yet undoubtedly, the key enrichment for academics and others navigating a scholarly corpus is linking, and in particular the linking that takes the reader out of one document and into another in the matter of a click or two. Since references are how authors make explicit the links between their work and precedent scholarship, what could be more fundamental to the reader than making those links immediately actionable? That said, automated linking is only really useful from a research perspective if it works across publications and across publishers. Not only do academics think about their own writings and those of their colleagues in terms of "author, title, rough date" -- the name of the journal itself is usually not high on the list of crucial identifying features -- but they are oblivious as to the identity of the publishers of all but their very favorite books and journals.
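    A present-day sketch, not part of the original article, of what DOI-based linking looks like in practice: the doi.org resolver supports content negotiation, so structured citation metadata for a CrossRef DOI can be fetched over plain HTTP. The DOI string below is a placeholder:

    ```python
    # Fetch CSL JSON citation metadata for a DOI via content negotiation.
    import requests

    def fetch_citation_metadata(doi: str) -> dict:
        resp = requests.get(
            f"https://doi.org/{doi}",
            headers={"Accept": "application/vnd.citationstyles.csl+json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    meta = fetch_citation_metadata("10.1234/placeholder-doi")  # placeholder
    print(meta.get("title"))
    ```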
  14. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009 (2009)
    Content
    Short Papers
    • A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm.
    • Unifying SysML and OWL, Henson Graves.
    • The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens.
    • A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm.
    • Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez.
    • Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik.
    • CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin.
    • A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer.
    • Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure.
    • Temporal Classes and OWL, Natalya Keberle.
    • Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler.
    • Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan.
    • A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier.
    • Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das.
    • SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das.
    • Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov.
    • A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez.
    • Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  15. Dodge, M.: What does the Internet look like, jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001)
    Content
    "The Internet is often likened to an organic entity, and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher Young Hyun at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in the early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in the last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium, and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, most alien, and yet most beautiful living creatures [2]. Having looked at the Walrus images, I began to wonder: perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based on hyperbolic geometry. This is a non-Euclidean space, which has the useful distorting property of making elements at the center of the display much larger than those on the periphery. You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly known as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two or three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, in the order of a million nodes. Providing useful and interactively useable visualization of such large volumes of graph data is a tough challenge, and is particularly apposite to the task of mapping Internet backbone infrastructures. In a recent email, Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially, "... we had to face the question of what it means to visualize a large graph. It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results show that Walrus is heading in the right direction in tackling these challenges.
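    A toy illustration, not Walrus code, of the focus+context distortion described above: a hyperbolic-style radial mapping squeezes unbounded distances into a unit disk, preserving detail near the focus while compressing the periphery:

    ```python
    # Map radial distance r from the focus into [0, 1): nearby points keep
    # most of their separation; distant points crowd together near the rim.
    import math

    def focus_context(r: float, k: float = 1.0) -> float:
        return math.tanh(k * r)

    for r in (0.1, 0.5, 1.0, 3.0, 10.0):
        print(f"r = {r:5.1f} -> {focus_context(r):.3f}")
    ```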
  16. Slavic, A.: Mapping intricacies : UDC to DDC (2010)
    Content
    Another challenge appears when, e.g., mapping Dewey class 890 Literatures of other specific languages and language families, which does not make sense in UDC, in which all languages and literatures have equal status. Standard UDC schedules do not have a selection of preferred literatures versus other literatures. In principle, UDC does not allow classes entitled 'others' which do not have defined semantic content. If entities are subdivided and there is no provision for an item outside the listed subclasses, then this item is subsumed under a top class or a broader class where all unspecified or general members of that class may be expected. If specification is needed, this can be devised by adding an alphabetical extension to the broader class. Here we have to find and list in the UDC Summary all literatures that are 'unpreferred', i.e. lumped in the 890 classes, and map them again as a many-to-one, specific-to-broader match. The example below illustrates another interesting case. Classes Dewey 061 and UDC 06 cover roughly the same semantic field, but in the subdivision the Dewey Summaries list a combination of subject and place and, as an enumerative classification, provide ready-made numbers for combinations of place that are most common in an average (American?) library. This is a frequent approach in schemes created with the physical book arrangement, i.e. library shelves, in mind. UDC, designed as an indexing language for information retrieval, keeps subject and place in separate tables and allows any concept of place, such as (7) North America, to be used in combination with any subject, as these may coincide in documents. Thus combinations such as Newspapers in North America, or Organizations in North America, would not be offered as ready-made combinations. There is no selection of 'preferred' or 'most needed' countries, languages or cultures in the standard UDC edition.
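    A schematic sketch of such many-to-one, specific-to-broader mappings; the class numbers are illustrative only, not an authoritative concordance:

    ```python
    # Several specific DDC classes may map to one broader UDC class, and an
    # unmapped class falls back to a broader one by truncation, mirroring the
    # "subsume under the broader class" rule described above.
    DDC_TO_UDC = {
        "891.7": "821.161.1",  # illustrative one-to-one mapping
        "894": "821",          # illustrative specific-to-broader mapping
        "899": "821",
    }

    def map_ddc(ddc):
        while ddc:
            if ddc in DDC_TO_UDC:
                return DDC_TO_UDC[ddc]
            ddc = ddc[:-1].rstrip(".")  # retreat to a broader DDC class
        return None

    print(map_ddc("894.51"))  # falls back to "894" -> "821"
    ```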
  17. Gonzalez, L.: What is FRBR? (2005)
    Content
    National FRBR experiments
    The larger the bibliographic database, the greater the effect of "FRBR-like" design in reducing the appearance of duplicate records. LC, RLG, and OCLC, all influenced by FRBR, are experimenting with the redesign of their databases. LC's Network Development and MARC Standards Office has posted at its web site the results of some of its investigations into FRBR and MARC, including possible display options for bibliographic information. The design of RLG's public catalog, RedLightGreen, has been described as "FRBR-ish" by Merrilee Proffitt, RLG's program officer. If you try a search for a prolific author or much-published title in RedLightGreen, you'll probably find that the display of search results is much different from what you would expect. OCLC Research has developed a prototype "frbrized" database for fiction, OCLC FictionFinder. Try a title search for a classic title like Romeo and Juliet and observe that OCLC includes, in the initial display of results (described as "works"), a graphic indicator (stars, ranging from one to five). These show in rough terms how many libraries own the work; Romeo and Juliet clearly gets a five. Indicators like this are something resource sharing staff can consider an "ILL quality rating." If you're intrigued by FRBR's possibilities and what they could mean to resource sharing workflow, start talking. Now is the time to connect with colleagues, your local and/or consortial system vendor, RLG, OCLC, and your professional organizations. Have input into how systems develop in the FRBR world.
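    A toy data model, not LC or OCLC code, of why FRBR-style grouping collapses duplicates: many manifestations hang off a single work, so a result list can show one row per work and count holdings across all of its manifestations:

    ```python
    # One FRBR "work" grouping several "manifestations" (editions).
    from dataclasses import dataclass, field

    @dataclass
    class Manifestation:
        isbn: str
        publisher: str
        year: int

    @dataclass
    class Work:
        title: str
        author: str
        manifestations: list = field(default_factory=list)

    romeo = Work("Romeo and Juliet", "William Shakespeare")
    romeo.manifestations += [
        Manifestation("isbn-example-1", "Publisher A", 2004),  # placeholders
        Manifestation("isbn-example-2", "Publisher B", 2008),
    ]
    # Display one row for the work; tally holdings over its manifestations.
    print(romeo.title, "editions:", len(romeo.manifestations))
    ```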
