Search (222 results, page 1 of 12)

  • Filter: type_ss:"el"
  1. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.13
    0.12973708 = product of:
      0.25947416 = sum of:
        0.25947416 = sum of:
          0.21295255 = weight(_text_:translating in 4820) [ClassicSimilarity], result of:
            0.21295255 = score(doc=4820,freq=2.0), product of:
              0.4287632 = queryWeight, product of:
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.057227984 = queryNorm
              0.49666703 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
          0.046521608 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
            0.046521608 = score(doc=4820,freq=2.0), product of:
              0.20040265 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057227984 = queryNorm
              0.23214069 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
      0.5 = coord(1/2)
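The indented breakdown above is Lucene's ClassicSimilarity "explain" output, and the same structure recurs throughout this result list. As a hedged sketch in plain Python (function names are my own), the "translating" leg can be reproduced from the quoted factors:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, idf_value, query_norm, field_norm):
    # score = queryWeight * fieldWeight, as in the explain tree above
    query_weight = idf_value * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf_value * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

print(idf(66, 44218))                                      # ≈ 7.4922
print(term_score(2.0, 7.4921947, 0.057227984, 0.046875))   # ≈ 0.21295
```

The outer `sum of` and `coord(m/n)` lines then combine such per-term scores, with `coord` down-weighting documents that match only some query terms.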
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
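The abstract's point about shared semantics can be illustrated with a toy mapping (all terms and concept IDs here are invented, not taken from the paper): two applications name the same domain concept differently, and a shared ontology concept mediates the translation between their terminologies.

```python
# Hypothetical terminologies of two applications, each mapped to shared ontology concepts.
cad_terms = {"wall": "concept:Wall", "slab": "concept:FloorSlab"}
gis_terms = {"barrier": "concept:Wall", "floor": "concept:FloorSlab"}

def translate(term, source, target):
    """Translate a term via the shared ontology concept both vocabularies point to."""
    concept = source[term]            # source term -> shared concept
    for t, c in target.items():       # shared concept -> target term
        if c == concept:
            return t
    raise KeyError(f"no target term for {concept}")

print(translate("wall", cad_terms, gis_terms))   # 'barrier'
```

Without the explicit concept layer, "wall" and "barrier" would simply fail to match, which is the interoperability problem the paper addresses.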
  2. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D.: ¬A picture is worth a thousand (coherent) words : building a natural description of images (2014) 0.10
    0.09820549 = sum of:
      0.03609434 = product of:
        0.10828302 = sum of:
          0.10828302 = weight(_text_:objects in 1874) [ClassicSimilarity], result of:
            0.10828302 = score(doc=1874,freq=6.0), product of:
              0.30417082 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.057227984 = queryNorm
              0.3559941 = fieldWeight in 1874, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1874)
        0.33333334 = coord(1/3)
      0.062111154 = product of:
        0.12422231 = sum of:
          0.12422231 = weight(_text_:translating in 1874) [ClassicSimilarity], result of:
            0.12422231 = score(doc=1874,freq=2.0), product of:
              0.4287632 = queryWeight, product of:
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.057227984 = queryNorm
              0.2897224 = fieldWeight in 1874, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1874)
        0.5 = coord(1/2)
    
    Content
    "People can summarize a complex scene in a few words without thinking twice. It's much more difficult for computers. But we've just gotten a bit closer -- we've developed a machine-learning system that can automatically produce captions (like the three above) to accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.
    Recent research has greatly improved object detection, classification, and labeling. But accurately describing a complex scene requires a deeper representation of what's going on in the scene, capturing how the various objects relate to one another and translating it all into natural-sounding language. Many efforts to construct computer-generated natural descriptions of images propose combining current state-of-the-art techniques in both computer vision and natural language processing to form a complete image description approach. But what if we instead merged recent computer vision and language models into a single jointly trained system, taking an image and directly producing a human-readable sequence of words to describe it?
    This idea comes from recent advances in machine translation between languages, where a Recurrent Neural Network (RNN) transforms, say, a French sentence into a vector representation, and a second RNN uses that vector representation to generate a target sentence in German. Now, what if we replaced that first RNN and its input words with a deep Convolutional Neural Network (CNN) trained to classify objects in images? Normally, the CNN's last layer is used in a final Softmax among known classes of objects, assigning a probability that each object might be in the image. But if we remove that final layer, we can instead feed the CNN's rich encoding of the image into an RNN designed to produce phrases. We can then train the whole system directly on images and their captions, so it maximizes the likelihood that descriptions it produces best match the training descriptions for each image."
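The pipeline described above (a CNN encoder whose final classification layer is removed, feeding an RNN decoder) can be sketched in miniature. This is a hedged toy, not the authors' model: the "CNN" is a stub, and the vocabulary and weights are invented and untrained, so the emitted words are meaningless; only the data flow matches the description.

```python
import math
import random

random.seed(0)

VOCAB = ["<start>", "a", "dog", "on", "grass", "<end>"]
HID = 4  # hidden-state size of the toy RNN

def cnn_encode(image):
    # Stand-in for a deep CNN: in the real system this is the rich encoding
    # below the removed Softmax layer; here it is a fixed projection of pixel stats.
    mean = sum(image) / len(image)
    return [mean, mean ** 2, 1.0, 0.5]

# Toy, untrained weights; a real captioner learns these from image/caption pairs.
W_step = [[random.uniform(-1, 1) for _ in range(HID + len(VOCAB))] for _ in range(HID)]
W_out = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(len(VOCAB))]

def caption(image, max_len=5):
    h = cnn_encode(image)  # the CNN encoding seeds the RNN state
    word = "<start>"
    words = []
    for _ in range(max_len):
        one_hot = [1.0 if w == word else 0.0 for w in VOCAB]
        # one RNN step: new state from previous state plus previous word
        h = [math.tanh(sum(w * x for w, x in zip(row, h + one_hot))) for row in W_step]
        logits = [sum(w * x for w, x in zip(row, h)) for row in W_out]
        word = VOCAB[max(range(len(VOCAB)), key=logits.__getitem__)]  # greedy decode
        if word == "<end>":
            break
        words.append(word)
    return words

print(caption([0.2, 0.4, 0.6]))
```

Training would adjust `W_step` and `W_out` to maximize the likelihood of the reference captions, exactly as the quoted text says; greedy decoding is the simplest way to read words out of the jointly trained system.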
  3. Understanding metadata (2004) 0.08
    0.07864658 = sum of:
      0.047632173 = product of:
        0.14289652 = sum of:
          0.14289652 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
            0.14289652 = score(doc=2686,freq=2.0), product of:
              0.30417082 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.057227984 = queryNorm
              0.46979034 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.33333334 = coord(1/3)
      0.031014407 = product of:
        0.062028814 = sum of:
          0.062028814 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.062028814 = score(doc=2686,freq=2.0), product of:
              0.20040265 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057227984 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.5 = coord(1/2)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.08
    0.075744346 = product of:
      0.15148869 = sum of:
        0.15148869 = product of:
          0.45446604 = sum of:
            0.45446604 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.45446604 = score(doc=1826,freq=2.0), product of:
                0.48517948 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057227984 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  5. Priss, U.: Faceted knowledge representation (1999) 0.07
    0.06881575 = sum of:
      0.041678146 = product of:
        0.12503444 = sum of:
          0.12503444 = weight(_text_:objects in 2654) [ClassicSimilarity], result of:
            0.12503444 = score(doc=2654,freq=2.0), product of:
              0.30417082 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.057227984 = queryNorm
              0.41106653 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
        0.33333334 = coord(1/3)
      0.027137605 = product of:
        0.05427521 = sum of:
          0.05427521 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.05427521 = score(doc=2654,freq=2.0), product of:
              0.20040265 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057227984 = queryNorm
              0.2708308 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
        0.5 = coord(1/2)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
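The notions in the abstract above (units, binary-matrix relations, facets, interpretations) lend themselves to a small sketch; the sample units and the German label mapping are invented for illustration and are not from the paper.

```python
# Units: atomic elements of the knowledge system.
units = ["cat", "mammal", "animal"]

# A relation as a binary matrix: broader[i][j] == 1 means units[j] is broader than units[i].
broader = [
    [0, 1, 0],   # cat -> mammal
    [0, 0, 1],   # mammal -> animal
    [0, 0, 0],
]

# A facet combines units and a relation: one viewpoint of the system.
facet = {"units": units, "relation": broader}

# An interpretation: a mapping that translates units into another representation.
to_german = {"cat": "Katze", "mammal": "Säugetier", "animal": "Tier"}

def broader_of(facet, unit):
    """Read the broader-than relation for one unit out of the binary matrix."""
    i = facet["units"].index(unit)
    return [facet["units"][j] for j, bit in enumerate(facet["relation"][i]) if bit]

print(broader_of(facet, "cat"))                          # ['mammal']
print([to_german[u] for u in broader_of(facet, "cat")])  # ['Säugetier']
```

A faceted thesaurus in this model is simply a set of such facets (e.g. one per hierarchy), with interpretations translating between their vocabularies.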
  6. Turner, J.M.; Mathieu, S.: Audio description text for indexing films (2007) 0.06
    0.062111154 = product of:
      0.12422231 = sum of:
        0.12422231 = product of:
          0.24844462 = sum of:
            0.24844462 = weight(_text_:translating in 701) [ClassicSimilarity], result of:
              0.24844462 = score(doc=701,freq=2.0), product of:
                0.4287632 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.057227984 = queryNorm
                0.5794448 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Access to audiovisual materials should be as open and free as access to print-based materials. However, we have not yet achieved such a reality. Methods useful for organising print-based materials do not necessarily work well when applied to audiovisual and multimedia materials. In this project, we studied using audio description text and written descriptions to generate keywords for indexing moving images. We found that such sources are fruitful and helpful. In the second part of the study, we looked at the possibility of automatically translating keywords from audio description text into other languages to use them as indexing. Here again, the results are encouraging.
  7. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.06
    0.060595475 = product of:
      0.12119095 = sum of:
        0.12119095 = product of:
          0.36357284 = sum of:
            0.36357284 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.36357284 = score(doc=230,freq=2.0), product of:
                0.48517948 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057227984 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  8. Networked Knowledge Organisation Systems and Services - TPDL 2011 : The 10th European Networked Knowledge Organisation Systems (NKOS) Workshop (2011) 0.05
    0.05323814 = product of:
      0.10647628 = sum of:
        0.10647628 = product of:
          0.21295255 = sum of:
            0.21295255 = weight(_text_:translating in 6033) [ClassicSimilarity], result of:
              0.21295255 = score(doc=6033,freq=2.0), product of:
                0.4287632 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.057227984 = queryNorm
                0.49666703 = fieldWeight in 6033, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6033)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Programme with links to the presentations:
    - Armando Stellato, Ahsan Morshed, Gudrun Johannsen, Yves Jacques, Caterina Caracciolo, Sachit Rajbhandari, Imma Subirats, Johannes Keizer: A Collaborative Framework for Managing and Publishing KOS
    - Christian Mader, Bernhard Haslhofer: Quality Criteria for Controlled Web Vocabularies
    - Ahsan Morshed, Benjamin Zapilko, Gudrun Johannsen, Philipp Mayr, Johannes Keizer: Evaluating approaches to automatically match thesauri from different domains for Linked Open Data
    - Johan De Smedt: SKOS extensions to cover mapping requirements
    - Mark Tomko: Translating biological data sets Into Linked Data
    - Daniel Kless: Ontologies and thesauri - similarities and differences
    - Antoine Isaac, Jacco van Ossenbruggen: Europeana and semantic alignment of vocabularies
    - Douglas Tudhope: Complementary use of ontologies and (other) KOS
    - Wilko van Hoek, Brigitte Mathiak, Philipp Mayr, Sascha Schüller: Comparing the accuracy of the semantic similarity provided by the Normalized Google Distance (NGD) and the Search Term Recommender (STR)
    - Denise Bedford: Selecting and Weighting Semantically Discovered Concepts as Social Tags
    - Stella Dextre Clarke, Johan De Smedt: ISO 25964-1: a new standard for development of thesauri and exchange of thesaurus data
  9. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    0.037872173 = product of:
      0.075744346 = sum of:
        0.075744346 = product of:
          0.22723302 = sum of:
            0.22723302 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.22723302 = score(doc=4388,freq=2.0), product of:
                0.48517948 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057227984 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Vgl. unter: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  10. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.04
    0.037872173 = product of:
      0.075744346 = sum of:
        0.075744346 = product of:
          0.22723302 = sum of:
            0.22723302 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.22723302 = score(doc=5669,freq=2.0), product of:
                0.48517948 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057227984 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia comes second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: In view of last week's publication of the 6-millionth article in the English-language Wikipedia, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are quite clearly not working. *"Since the volunteer editors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF appears unable to counter this in any way, the only viable path for the editors would be to prohibit, for the time being, the creation of new articles about companies"*, writes the user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  11. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.04
    0.03572413 = product of:
      0.07144826 = sum of:
        0.07144826 = product of:
          0.21434477 = sum of:
            0.21434477 = weight(_text_:objects in 469) [ClassicSimilarity], result of:
              0.21434477 = score(doc=469,freq=8.0), product of:
                0.30417082 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057227984 = queryNorm
                0.7046855 = fieldWeight in 469, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=469)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
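The abstract's idea of preserving a process and assessing authenticity upon re-execution can be sketched minimally. All names, the step functions, and the choice of hashing are my own illustration, not the author's method: a trace records a digest of each step's output, and a re-execution is judged authentic if it reproduces the trace.

```python
import hashlib
import json

def run_and_capture(steps, data):
    """steps: list of (name, fn) pairs. Returns the final result and a provenance trace."""
    trace = []
    for name, fn in steps:
        data = fn(data)
        # digest of the step output documents the process, not just the final object
        digest = hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()
        trace.append({"step": name, "output_sha256": digest})
    return data, trace

def authentic(recorded, rerun):
    # a re-execution is authentic if every step reproduces the recorded digest
    return recorded == rerun

steps = [("normalise", lambda xs: sorted(xs)), ("square", lambda xs: [x * x for x in xs])]
result, trace = run_and_capture(steps, [3, 1, 2])
_, rerun_trace = run_and_capture(steps, [3, 1, 2])
print(result, authentic(trace, rerun_trace))  # [1, 4, 9] True
```

The point mirrored from the abstract: preserving only `result` (the data object) would lose the intermediate evidence that the trace retains.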
  12. EuropeanaTech and Multilinguality : Issue 1 of EuropeanaTech Insight (2015) 0.04
    0.03549209 = product of:
      0.07098418 = sum of:
        0.07098418 = product of:
          0.14196835 = sum of:
            0.14196835 = weight(_text_:translating in 1832) [ClassicSimilarity], result of:
              0.14196835 = score(doc=1832,freq=2.0), product of:
                0.4287632 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.057227984 = queryNorm
                0.33111134 = fieldWeight in 1832, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1832)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Welcome to the very first issue of EuropeanaTech Insight, a multimedia publication about research and development within the EuropeanaTech community. EuropeanaTech is a very active community. It spans all of Europe and is made up of technical experts from the various disciplines within digital cultural heritage. At any given moment, members can be found presenting their work in project meetings, seminars and conferences around the world. Now, through EuropeanaTech Insight, we can share that inspiring work with the whole community. In our first three issues, we're showcasing topics discussed at the EuropeanaTech 2015 Conference, an exciting event that gave rise to lots of innovative ideas and fruitful conversations on the themes of data quality, data modelling, open data, data re-use, multilingualism and discovery. Welcome, bienvenue, bienvenido, Välkommen, Tervetuloa to the first Issue of EuropeanaTech Insight. Are we talking your language? No? Well I can guarantee you Europeana is. One of the European Union's great beauties and strengths is its diversity. That diversity is perhaps most evident in the 24 different languages spoken in the EU. Making it possible for all European citizens to easily and seamlessly communicate in their native language with others who do not speak that language is a huge technical undertaking. Translating documents, news, speeches and historical texts was once exclusively done manually. Clearly, that takes a huge amount of time and resources and means that not everything can be translated... However, with the advances in machine and automatic translation, it's becoming more possible to provide instant and pretty accurate translations. Europeana provides access to over 40 million digitised cultural heritage objects, offering content in over 33 languages. But what value does Europeana provide if people cannot find results in their native language? None. 
That's why the EuropeanaTech community is collectively working towards making it more possible for everyone to discover our collections in their native language. In this issue of EuropeanaTech Insight, we hear from community members who are making great strides in machine translation and enrichment tools to help improve not only access to data, but also how we retrieve, browse and understand it.
  13. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.03
    0.031055577 = product of:
      0.062111154 = sum of:
        0.062111154 = product of:
          0.12422231 = sum of:
            0.12422231 = weight(_text_:translating in 517) [ClassicSimilarity], result of:
              0.12422231 = score(doc=517,freq=2.0), product of:
                0.4287632 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.057227984 = queryNorm
                0.2897224 = fieldWeight in 517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=517)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.). Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.). Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community."
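A hedged sketch of the kind of concept-based metadata the passage argues for, modelled here with plain dicts rather than real SKOS RDF (the URI and labels are invented): language-tagged preferred and alternative labels recover exactly the synonym and cross-language cases the text lists as failures of plain text search.

```python
# A SKOS-style concept: one shared unit of meaning with labels in several languages.
concept = {
    "uri": "http://example.org/concepts/bank-finance",  # hypothetical URI
    "prefLabel": {"en": "bank (finance)", "de": "Bank (Finanzwesen)"},
    "altLabel": {"en": ["financial institution"]},
    "broader": ["http://example.org/concepts/organisation"],
}

def matches(concept, query, lang=None):
    """True if the query matches any label, optionally restricted to one language."""
    labels = []
    for field in ("prefLabel", "altLabel"):
        for label_lang, value in concept.get(field, {}).items():
            if lang and label_lang != lang:
                continue
            labels += value if isinstance(value, list) else [value]
    return any(query.lower() in label.lower() for label in labels)

print(matches(concept, "financial institution"))  # True: synonym recall
print(matches(concept, "Bank", lang="de"))        # True: cross-language search
```

Because resources are annotated with the concept (not with one string), a search for any of the labels reaches the same annotated resources, and the overloaded-meaning problem is handled by having distinct concepts for distinct senses.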
  14. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.03
    0.031014407 = product of:
      0.062028814 = sum of:
        0.062028814 = product of:
          0.12405763 = sum of:
            0.12405763 = weight(_text_:22 in 5449) [ClassicSimilarity], result of:
              0.12405763 = score(doc=5449,freq=2.0), product of:
                0.20040265 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057227984 = queryNorm
                0.61904186 = fieldWeight in 5449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5449)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.1997 19:26:34
  15. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.03
    0.031014407 = product of:
      0.062028814 = sum of:
        0.062028814 = product of:
          0.12405763 = sum of:
            0.12405763 = weight(_text_:22 in 5837) [ClassicSimilarity], result of:
              0.12405763 = score(doc=5837,freq=2.0), product of:
                0.20040265 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057227984 = queryNorm
                0.61904186 = fieldWeight in 5837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5837)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30.11.1996 13:22:37
  16. Kosmos Weltatlas 2000 : Der Kompass für das 21. Jahrhundert. Inklusive Welt-Routenplaner (1999) 0.03
    0.031014407 = product of:
      0.062028814 = sum of:
        0.062028814 = product of:
          0.12405763 = sum of:
            0.12405763 = weight(_text_:22 in 4085) [ClassicSimilarity], result of:
              0.12405763 = score(doc=4085,freq=2.0), product of:
                0.20040265 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057227984 = queryNorm
                0.61904186 = fieldWeight in 4085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4085)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7.11.1999 18:22:39
  17. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.03
    0.030340767 = product of:
      0.060681533 = sum of:
        0.060681533 = product of:
          0.121363066 = sum of:
            0.121363066 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
              0.121363066 = score(doc=1936,freq=10.0), product of:
                0.20040265 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057227984 = queryNorm
                0.6055961 = fieldWeight in 1936, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  18. Faceted classification of information (o.J.) 0.03
    0.02977011 = product of:
      0.05954022 = sum of:
        0.05954022 = product of:
          0.17862065 = sum of:
            0.17862065 = weight(_text_:objects in 2653) [ClassicSimilarity], result of:
              0.17862065 = score(doc=2653,freq=2.0), product of:
                0.30417082 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057227984 = queryNorm
                0.58723795 = fieldWeight in 2653, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2653)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    An explanation of faceted classification meant for people working in knowledge management. An example given for a high-technology company has the fundamental categories Products, Applications, Organizations, People, Domain objects ("technologies applied in the marketplace in which the organization participates"), Events (i.e. time), and Publications.
  19. Koster, L.: Persistent identifiers for heritage objects (2020) 0.03
    0.02977011 = product of:
      0.05954022 = sum of:
        0.05954022 = product of:
          0.17862065 = sum of:
            0.17862065 = weight(_text_:objects in 5718) [ClassicSimilarity], result of:
              0.17862065 = score(doc=5718,freq=8.0), product of:
                0.30417082 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057227984 = queryNorm
                0.58723795 = fieldWeight in 5718, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5718)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices and existing PID systems, with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented and used to address a number of practical considerations. This section concludes with a number of recommendations.
  20. Arms, W.Y.; Blanchi, C.; Overly, E.A.: ¬An architecture for information in digital libraries (1997) 0.03
    0.029470904 = product of:
      0.058941808 = sum of:
        0.058941808 = product of:
          0.17682542 = sum of:
            0.17682542 = weight(_text_:objects in 1260) [ClassicSimilarity], result of:
              0.17682542 = score(doc=1260,freq=16.0), product of:
                0.30417082 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057227984 = queryNorm
                0.5813359 = fieldWeight in 1260, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1260)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page.
The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
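The building blocks described in this abstract (digital objects identified by handles, stored in repositories, with meta-objects managing sets of objects) can be sketched as a minimal data model. All class and method names below are illustrative assumptions, not the NDLP pilot's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    """Digital material managed in a networked environment (names hypothetical)."""
    handle: str                  # globally unique identifier
    content: bytes               # e.g. a digitized photograph or text
    metadata: dict = field(default_factory=dict)

class Repository:
    """Stores digital objects, retrievable by handle."""
    def __init__(self):
        self._store = {}
    def deposit(self, obj: DigitalObject) -> None:
        self._store[obj.handle] = obj
    def retrieve(self, handle: str) -> DigitalObject:
        return self._store[handle]

class HandleSystem:
    """Resolves a handle to the repository holding the object."""
    def __init__(self):
        self._registry = {}
    def register(self, handle: str, repo: Repository) -> None:
        self._registry[handle] = repo
    def resolve(self, handle: str) -> DigitalObject:
        return self._registry[handle].retrieve(handle)

# A "meta-object" managing a set of digital objects can itself be a
# digital object whose content lists the handles of its members.
repo = Repository()
handles = HandleSystem()
photo = DigitalObject("loc.ndlp/example-001", b"...image bytes...",
                      {"title": "Coolidge Consumerism photograph"})
repo.deposit(photo)
handles.register(photo.handle, repo)
resolved = handles.resolve("loc.ndlp/example-001")
```

The design choice this illustrates is the separation of concerns in the Kahn/Wilensky framework: identification (handles) is decoupled from storage (repositories), so objects can move between repositories without their identifiers changing.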

Languages

  • e 129
  • d 86
  • el 2
  • a 1
  • f 1
  • nl 1

Types

  • a 112
  • i 10
  • m 5
  • s 5
  • r 3
  • b 2
  • n 1
  • x 1