Search (350 results, page 2 of 18)

  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Smith, D.A.; Shadbolt, N.R.: FacetOntology : expressive descriptions of facets in the Semantic Web (2012) 0.01
    0.012539252 = product of:
      0.058516506 = sum of:
        0.03019857 = weight(_text_:web in 2208) [ClassicSimilarity], result of:
          0.03019857 = score(doc=2208,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3122631 = fieldWeight in 2208, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2208)
        0.0071344664 = weight(_text_:information in 2208) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=2208,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 2208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2208)
        0.021183468 = weight(_text_:retrieval in 2208) [ClassicSimilarity], result of:
          0.021183468 = score(doc=2208,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 2208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2208)
      0.21428572 = coord(3/14)
    
    Abstract
    The formal structure of the information on the Semantic Web lends itself to faceted browsing, an information retrieval method where users can filter results based on the values of properties ("facets"). Numerous faceted browsers have been created to browse RDF and Linked Data, but these systems use their own ontologies for defining how data is queried to populate their facets. Since the source data is the same format across these systems (specifically, RDF), we can unify the different methods of describing how to query the underlying data, to enable compatibility across systems, and provide an extensible base ontology for future systems. To this end, we present FacetOntology, an ontology that defines how to query data to form a faceted browser, and a number of transformations and filters that can be applied to data before it is shown to users. FacetOntology overcomes limitations in the expressivity of existing work, by enabling the full expressivity of SPARQL when selecting data for facets. By applying a FacetOntology definition to data, a set of facets is specified, each with queries and filters to source RDF data, which enables faceted browsing systems to be created using that RDF data.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Semantic Web
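    The relevance figures attached to each hit above are Lucene ClassicSimilarity "explain" trees. As a minimal sketch, the arithmetic for hit 1 (doc 2208) can be reproduced directly from the constants in its tree; the function and variable names below are illustrative, not part of Lucene's API:

```python
import math

# ClassicSimilarity, as laid out in the explain tree for hit 1 (doc 2208):
#   score = coord * sum over matching terms of (queryWeight * fieldWeight),
#   queryWeight = idf * queryNorm,
#   fieldWeight = sqrt(termFreq) * idf * fieldNorm.

QUERY_NORM = 0.029633347   # queryNorm from the tree
FIELD_NORM = 0.0390625     # fieldNorm(doc=2208)

def term_score(freq, idf):
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM
    return query_weight * field_weight

terms = {                     # termFreq, idf (copied from the explain output)
    "web":         (6.0, 3.2635105),
    "information": (4.0, 1.7554779),
    "retrieval":   (4.0, 3.024915),
}
raw = sum(term_score(f, idf) for f, idf in terms.values())
score = raw * (3 / 14)        # coord(3/14): 3 of 14 query terms matched
print(f"{score:.9f}")         # agrees with the reported 0.012539252
```

    The same recipe reproduces every tree on this page; only the frequencies, field norms, and the coord fraction change per hit.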
  2. Saabiyeh, N.: What is a good ontology semantic similarity measure that considers multiple inheritance cases of concepts? (2018) 0.01
    0.012467211 = product of:
      0.087270476 = sum of:
        0.044992477 = weight(_text_:wide in 4530) [ClassicSimilarity], result of:
          0.044992477 = score(doc=4530,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.042278 = weight(_text_:web in 4530) [ClassicSimilarity], result of:
          0.042278 = score(doc=4530,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43716836 = fieldWeight in 4530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
      0.14285715 = coord(2/14)
    
    Abstract
    I need to measure semantic similarity between CSO ontology concepts, based on the ontology structure (concept path, depth, least common subsumer (LCS) ...). CSO (Computer Science Ontology) is a large-scale ontology of research areas. A concept in CSO may have multiple parents/super-concepts (i.e. a concept may be a child of many other concepts), e.g.: (world wide web) is a parent of (semantic web); (semantics) is a parent of (semantic web). I found some measures that meet my needs, but the papers proposing these measures are not cited, so I was hesitant to use them. I also found a measure that relies on weighted edges, but it does not consider multiple inheritance (super-concepts).
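    One way to handle the multiple-inheritance case is to compute a Wu-Palmer-style similarity over the concept DAG rather than a tree: take depth as the shortest distance to the root over any parent chain, and take the LCS as the deepest ancestor common to both concepts across all parent paths. A minimal sketch, using a toy hierarchy that mirrors the example in the question (this is one plausible reading, not a specific published measure):

```python
from collections import deque

# Parents of each concept; a node may have several (multiple inheritance).
parents = {
    "semantic web": ["world wide web", "semantics"],
    "world wide web": ["computer science"],
    "semantics": ["computer science"],
    "computer science": [],  # root
}

def ancestors(c):
    """All ancestors of c (including c itself), over every parent path."""
    seen, queue = {c}, deque([c])
    while queue:
        for p in parents[queue.popleft()]:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

def depth(c):
    """Shortest distance from c up to a root, via BFS over parent links."""
    d, queue = {c: 0}, deque([c])
    while queue:
        n = queue.popleft()
        if not parents[n]:
            return d[n]
        for p in parents[n]:
            if p not in d:
                d[p] = d[n] + 1
                queue.append(p)

def wu_palmer(a, b):
    # LCS = deepest shared ancestor; sim = 2*depth(LCS) / (depth(a)+depth(b))
    common = ancestors(a) & ancestors(b)
    lcs_depth = max(depth(c) for c in common)
    total = depth(a) + depth(b)
    return 2 * lcs_depth / total if total else 1.0

print(wu_palmer("semantic web", "world wide web"))  # LCS is "world wide web"
print(wu_palmer("world wide web", "semantics"))     # LCS is the root: 0.0
```

    Because `ancestors` follows every parent link, the LCS search naturally accounts for concepts with multiple super-concepts.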
  3. O'Neill, E.T.; Bennett, R.; Kammerer, K.: Using authorities to improve subject searches (2012) 0.01
    0.01245197 = product of:
      0.058109194 = sum of:
        0.020922182 = weight(_text_:web in 310) [ClassicSimilarity], result of:
          0.020922182 = score(doc=310,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=310)
        0.0060537956 = weight(_text_:information in 310) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=310,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=310)
        0.031133216 = weight(_text_:retrieval in 310) [ClassicSimilarity], result of:
          0.031133216 = score(doc=310,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 310, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=310)
      0.21428572 = coord(3/14)
    
    Abstract
    Authority files have played an important role in improving the quality of indexing and subject cataloging. Although authorities can significantly improve search by increasing the number of access points, they are rarely an integral part of the information retrieval process, particularly end-users searches. A retrieval prototype, searchFAST, was developed to test the feasibility of using an authority file as an index to bibliographic records. searchFAST uses FAST (Faceted Application of Subject Terminology) as an index to OCLC's WorldCat.org bibliographic database. The searchFAST methodology complements, rather than replaces, existing WorldCat.org access. The bibliographic file is searched indirectly; first the authority file is searched to identify appropriate subject headings, then the headings are used to retrieve the matching bibliographic records. The prototype demonstrates the effectiveness and practicality of using an authority file as an index. Searching the authority file leverages authority control work by increasing the number of access points while supporting a simple interface designed for end-users.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
    Theme
    Verbale Doksprachen im Online-Retrieval
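    The searchFAST methodology above is a two-stage lookup: search the authority file first to identify headings, then use those headings to retrieve bibliographic records. A minimal sketch with toy in-memory data; the headings, variant terms, and record titles below are invented for illustration and are not from FAST or WorldCat:

```python
# Toy authority file: established heading -> variant (see-from) terms.
authority = {
    "Information retrieval": ["IR", "Document retrieval"],
    "Semantic Web": ["Web 3.0"],
}

# Toy bibliographic file: record title -> assigned subject headings.
bib = {
    "Intro to IR": ["Information retrieval"],
    "Linked data basics": ["Semantic Web"],
    "Search engines": ["Information retrieval", "Semantic Web"],
}

def search(user_term):
    t = user_term.lower()
    # Stage 1: search the authority file for matching headings
    # (the variant terms are the extra access points authority control adds).
    headings = {h for h, variants in authority.items()
                if t == h.lower() or any(t == v.lower() for v in variants)}
    # Stage 2: retrieve the bibliographic records carrying those headings.
    return sorted(title for title, hs in bib.items() if headings & set(hs))

print(search("IR"))  # the variant resolves to "Information retrieval"
```

    The bibliographic file is never searched directly; matching the user's term against variants in stage 1 is what widens the set of access points.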
  4. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.01
    0.012424889 = product of:
      0.08697422 = sum of:
        0.07890249 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.07890249 = score(doc=54,freq=16.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
        0.008071727 = weight(_text_:information in 54) [ClassicSimilarity], result of:
          0.008071727 = score(doc=54,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 54, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper presents the Semantic Web: its content, technologies, goals, and the requirements for the expansion of Web 3.0. It also describes the main components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies and OWL. The benefits the Semantic Web offers for library functions, in providing valuable information services and making the best use of library collections, are also discussed.
    Theme
    Semantic Web
  5. Sy, M.-F.; Ranwez, S.; Montmain, J.; Ragnault, A.; Crampes, M.; Ranwez, V.: User centered and ontology based information retrieval system for life sciences (2012) 0.01
    0.0108491 = product of:
      0.05062913 = sum of:
        0.013948122 = weight(_text_:web in 699) [ClassicSimilarity], result of:
          0.013948122 = score(doc=699,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 699, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=699)
        0.009885807 = weight(_text_:information in 699) [ClassicSimilarity], result of:
          0.009885807 = score(doc=699,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19003606 = fieldWeight in 699, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=699)
        0.026795205 = weight(_text_:retrieval in 699) [ClassicSimilarity], result of:
          0.026795205 = score(doc=699,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.29892567 = fieldWeight in 699, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=699)
      0.21428572 = coord(3/14)
    
    Abstract
    Background: Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings are the basis of the biomedical publication indexing and information retrieval process offered by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation of their adequacy to the query is provided. Users may thus be confused by the selection and have no idea how to adapt their queries so that the results match their expectations. Results: This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess the adequacy of documents with respect to a query. The selection of documents is displayed in a semantic map to provide graphical indications that make explicit to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus, by facilitating query concept weighting and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. Conclusions: The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system highlights relevant information to support decision making.
  6. Pohl, A.; Steeg, F.: Zurück ins Web : die Entwicklung eines neuen Webauftritts für die Nordrhein-Westfälische Bibliographie (NWBib) (2016) 0.01
    0.010686181 = product of:
      0.07480326 = sum of:
        0.03856498 = weight(_text_:wide in 3063) [ClassicSimilarity], result of:
          0.03856498 = score(doc=3063,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 3063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3063)
        0.036238287 = weight(_text_:web in 3063) [ClassicSimilarity], result of:
          0.036238287 = score(doc=3063,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 3063, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3063)
      0.14285715 = coord(2/14)
    
    Abstract
    Since the beginning of 2014, the North Rhine-Westphalian Library Service Centre (Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen, hbz) has been developing a new web presence for the regional bibliography of North Rhine-Westphalia, the Nordrhein-Westfälische Bibliographie (NWBib), to specifications from, and under the review of, the university and state libraries in Düsseldorf, Münster and Bonn. The development builds on the web interface of the Linked Open Data service lobid and is implemented entirely with open-source software. From the perspective of the development team at hbz, this article describes the context and course of the project. It sketches the history of the NWBib with a focus on the bibliography's relationship to the World Wide Web (WWW), explains the preconditions for the redevelopment and the guiding principles of the development process, and gives an overview of the usage of the new web presence and of the technology used to implement it. The article closes with lessons learned and an outlook on further developments.
  7. Laaff, M.: Googles genialer Urahn (2011) 0.01
    0.010581772 = product of:
      0.0370362 = sum of:
        0.016068742 = weight(_text_:wide in 4610) [ClassicSimilarity], result of:
          0.016068742 = score(doc=4610,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.122383565 = fieldWeight in 4610, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4610)
        0.015099285 = weight(_text_:web in 4610) [ClassicSimilarity], result of:
          0.015099285 = score(doc=4610,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.15613155 = fieldWeight in 4610, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4610)
        0.0025224148 = weight(_text_:information in 4610) [ClassicSimilarity], result of:
          0.0025224148 = score(doc=4610,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.048488684 = fieldWeight in 4610, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4610)
        0.0033457582 = product of:
          0.010037274 = sum of:
            0.010037274 = weight(_text_:22 in 4610) [ClassicSimilarity], result of:
              0.010037274 = score(doc=4610,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.09672529 = fieldWeight in 4610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4610)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Content
    Card indexes, telephones, multimedia furniture: in 1934 Otlet developed the idea of a worldwide "network" of knowledge. Radio and television had barely been invented when he tried to develop multimedia concepts to improve the possibilities for cooperation among researchers. Otlet racked his brain over how knowledge could be made accessible across great distances. He developed multimedia work furniture which, with card indexes, telephones and other features, attempted what today is possible at any computer. Even without the help of electronic data processing, he developed ideas whose realization we know today under terms such as Web 2.0 or Wikipedia. Nevertheless, his name and his work have today largely fallen into oblivion. The Americans Vannevar Bush, Ted Nelson and Douglas Engelbart are regarded as the pioneers of hypertext and the internet. The remains of the Mundaneum collection mouldered for decades in half-derelict attics.
    The dream of a dynamic, constantly growing knowledge network: this analogy also suggests itself because Otlet was already thinking about how annotations that correct errors or record dissent could flow into his networked knowledge catalogue. Charles van den Heuvel of the Royal Netherlands Academy of Arts and Sciences warns against the analogy, however: in his interpretation, Otlet envisaged a system in which knowledge is ordered hierarchically. Only a small group of scholars was to work on the classification of knowledge, and edits and annotations were not to merge with the information, as they do in Wikipedia, but merely to supplement it. The network Otlet imagined went far beyond the World Wide Web with its hypertext structure. Otlet wanted not only to link pieces of information with one another; the links themselves were additionally to be charged with meaning. Many experts agree that this idea of Otlet's shows many parallels to the concept of the "Semantic Web", whose goal is to make the meaning of information usable by machines, so that information can be interpreted and further processed automatically. Projects attempting to realize the Semantic Web could profit from a look at Otlet's concepts, says van den Heuvel, in particular from his reflections on hierarchy and centralization. At the Mundaneum in Mons, work is currently under way to digitize Otlet's papers in order to put them online. That may still take quite a while, archivist Gillen warns. But when it is done, Otlet's vision will finally be fulfilled: his collection of knowledge will be accessible to the world. Paperless, retrievable by anyone."
    Date
    24.10.2008 14:19:22
    Footnote
    Vgl. unter: http://www.spiegel.de/netzwelt/web/0,1518,768312,00.html.
  8. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic-Web-Technologie (2015) 0.01
    0.010449948 = product of:
      0.07314963 = sum of:
        0.034519844 = weight(_text_:web in 2471) [ClassicSimilarity], result of:
          0.034519844 = score(doc=2471,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35694647 = fieldWeight in 2471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2471)
        0.038629785 = weight(_text_:bibliothek in 2471) [ClassicSimilarity], result of:
          0.038629785 = score(doc=2471,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.31752092 = fieldWeight in 2471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2471)
      0.14285715 = coord(2/14)
    
    Abstract
    This article presents an approach showing how the library of the University of Konstanz, and other libraries with an in-house classification scheme, can keep their own classification and still profit from the subject-indexing work of other libraries. It presents a concept showing how Semantic Web technology can be used to create similarity relations between verbal subject indexing, RVK, DDC and in-house classification schemes, which allow subject-indexing information to be translated into other classification systems and thus make automation of subject indexing possible.
  9. Hartmann, F.: Paul Otlets Hypermedium : Dokumentation als Gegenidee zur Bibliothek (2015) 0.01
    0.0104487995 = product of:
      0.07314159 = sum of:
        0.06243516 = weight(_text_:bibliothek in 1432) [ClassicSimilarity], result of:
          0.06243516 = score(doc=1432,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.5131913 = fieldWeight in 1432, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0625 = fieldNorm(doc=1432)
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 1432) [ClassicSimilarity], result of:
              0.032119278 = score(doc=1432,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 1432, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1432)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Already at the turn of the 20th century, the Belgian private scholar Paul Otlet doubted the future of the book and the library. Instead, he began to create a documentation and reorganization of the world's knowledge, and to network it by means of a card-index system (Répertoire Bibliographique Universel). This project of a flexible, query-oriented body of knowledge in a "hypermedium" (Otlet) occupied the very technological gap that has since produced a new knowledge culture of digital mediality, one that bursts open the library epoch.
    Date
    22. 8.2016 15:58:46
  10. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.01
    0.010440619 = product of:
      0.073084325 = sum of:
        0.0623779 = weight(_text_:web in 4331) [ClassicSimilarity], result of:
          0.0623779 = score(doc=4331,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.6450079 = fieldWeight in 4331, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.032119278 = score(doc=4331,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    The Semantic Web, or Linked Data, has the potential to revolutionize the availability of data and knowledge as well as access to them. Knowledge organization systems such as thesauri, which index and structure data by content, can make a major contribution here. Unfortunately, many of these systems are still available only in book form or in special applications. How, then, can they be used for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the web and linked with other resources.
    Date
    15. 3.2011 19:21:22
    Theme
    Semantic Web
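    The "translation" SKOS performs can be illustrated by serializing a tiny two-term thesaurus as SKOS concepts in Turtle. The sketch below generates the Turtle by plain string formatting (a real pipeline would use an RDF library); the `ex:` namespace and the labels are made up for the example:

```python
# Minimal thesaurus: concept URI -> (preferred label, broader concept or None).
thesaurus = {
    "ex:retrieval": ("Information retrieval", None),
    "ex:webRetrieval": ("Web retrieval", "ex:retrieval"),
}

lines = [
    "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
    "@prefix ex: <http://example.org/thesaurus/> .",  # made-up namespace
    "",
]
for uri, (label, broader) in thesaurus.items():
    lines.append(f"{uri} a skos:Concept ;")
    # Terminate with ";" if a broader relation follows, "." otherwise.
    lines.append(f'    skos:prefLabel "{label}"@en' + (" ;" if broader else " ."))
    if broader:
        lines.append(f"    skos:broader {broader} .")
turtle = "\n".join(lines)
print(turtle)
```

    The hierarchical BT/NT relation of a classical thesaurus maps onto `skos:broader`, which is what makes the concepts citable and linkable on the web.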
  11. Schirrmeister, N.-P.; Keil, S.: Aufbau einer Infrastruktur für Information Retrieval-Evaluationen (2012) 0.01
    0.010234192 = product of:
      0.071639344 = sum of:
        0.01804893 = weight(_text_:information in 3097) [ClassicSimilarity], result of:
          0.01804893 = score(doc=3097,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3469568 = fieldWeight in 3097, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
        0.05359041 = weight(_text_:retrieval in 3097) [ClassicSimilarity], result of:
          0.05359041 = score(doc=3097,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.59785134 = fieldWeight in 3097, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
      0.14285715 = coord(2/14)
    
    Abstract
    The project "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE) provides a software infrastructure to support information retrieval evaluations (IR evaluations). The infrastructure is based on a toolkit developed at GESIS within the DFG project IRM. The goal is to offer a system that can be used for research and teaching on IR evaluations at the Fachbereich Media. This paper describes some aspects of the AIIRE project; its goal is to build a software infrastructure which supports the evaluation of information retrieval algorithms.
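    At its core, the kind of evaluation such an infrastructure supports amounts to computing standard effectiveness metrics from a ranked result list and a set of relevance judgements (qrels). A minimal sketch with invented data:

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k ranked documents that are judged relevant."""
    hits = sum(1 for doc in ranking[:k] if doc in relevant)
    return hits / k

def average_precision(ranking, relevant):
    """Mean of precision@k over the ranks k where a relevant doc appears."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

ranking = ["d3", "d1", "d7", "d2", "d5"]   # system output, best first
relevant = {"d1", "d2", "d9"}              # judged relevant (qrels)
print(precision_at_k(ranking, relevant, 3))  # 1 hit in the top 3 -> 1/3
print(average_precision(ranking, relevant))
```

    Averaging `average_precision` over all topics of a test collection gives MAP, the headline number most IR evaluation campaigns report.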
  12. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.010113766 = product of:
      0.07079636 = sum of:
        0.062766545 = weight(_text_:web in 4649) [ClassicSimilarity], result of:
          0.062766545 = score(doc=4649,freq=18.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.64902663 = fieldWeight in 4649, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.024089456 = score(doc=4649,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
    Theme
    Semantic Web
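
The explain trees above are Lucene ClassicSimilarity (TF-IDF) score breakdowns. As a minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf x idf x fieldNorm, score = queryWeight x fieldWeight), the figures in the weight(_text_:22 in 4649) branch can be reproduced:

```python
import math

def tf(freq):
    # term-frequency factor: sqrt of the raw in-field frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency over the whole index
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Reproduce the weight(_text_:22 in 4649) branch shown above
i = idf(3622, 44218)                      # idf(docFreq=3622, maxDocs=44218) = 3.5018296
qw = i * 0.029633347                      # queryWeight = idf * queryNorm = 0.103770934
fw = tf(2.0) * i * 0.046875               # fieldWeight = tf * idf * fieldNorm = 0.23214069
score = qw * fw                           # score(doc=4649) = 0.024089456
print(i, qw, fw, score)
```

The same arithmetic reproduces every leaf weight in these trees; only queryNorm (here 0.029633347) depends on the query as a whole rather than on the single term.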
  13. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.01
    0.009930608 = product of:
      0.046342835 = sum of:
        0.025624339 = weight(_text_:web in 468) [ClassicSimilarity], result of:
          0.025624339 = score(doc=468,freq=12.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.26496404 = fieldWeight in 468, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
        0.008008419 = weight(_text_:information in 468) [ClassicSimilarity], result of:
          0.008008419 = score(doc=468,freq=14.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1539468 = fieldWeight in 468, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
        0.012710081 = weight(_text_:retrieval in 468) [ClassicSimilarity], result of:
          0.012710081 = score(doc=468,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.1417929 = fieldWeight in 468, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
      0.21428572 = coord(3/14)
    
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of digitally created content has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation, e.g. digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving community have made valuable contributions to the management of huge amounts of knowledge and data. However, the two communities approach this topic from different views, which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving community, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, supported by appropriate W3C recommendations and a large user community. At the forefront of making the Semantic Web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives.
Semantic representations of contextual knowledge about cultural heritage objects will enhance organization and access of data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human Computer Interaction) will also be included in the research.
    • Semantic search & semantic information retrieval in digital archives and digital libraries
    • Semantic multimedia archives
    • Ontologies & linked data for digital archives and digital libraries
    • Ontologies & linked data for multimedia archives
    • Implementations and evaluations of semantic digital archives
    • Visualization and exploration of digital content
    • User interfaces for semantic digital libraries
    • User interfaces for intelligent multimedia information retrieval
    • User studies focusing on end-user needs and information seeking behavior of end-users
    • Theoretical and practical archiving frameworks using Semantic (Web) technologies
    • Logical theories for digital archives
    • Semantic (Web) services implementing the OAIS standard
    • Semantic or logical provenance models for digital archives or digital libraries
    • Information integration/semantic ingest (e.g. from digital libraries)
    • Trust for ingest and data security/integrity check for long-term storage of archival records
    • Semantic extensions of emulation/virtualization methodologies tailored for digital archives
    • Semantic long-term storage and hardware organization tailored for AIS
    • Migration strategies based on Semantic (Web) technologies
    • Knowledge evolution
    We expect new insights and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience consisting of scientists and scholars from the digital library, digital archiving, multimedia technology and semantic web communities, the information and library sciences, as well as from the social sciences and (digital) humanities, in particular people working on the mentioned topics. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  14. Daudaravicius, V.: ¬A framework for keyphrase extraction from scientific journals (2016) 0.01
    0.009914528 = product of:
      0.06940169 = sum of:
        0.044992477 = weight(_text_:wide in 2930) [ClassicSimilarity], result of:
          0.044992477 = score(doc=2930,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.024409214 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.024409214 = score(doc=2930,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.14285715 = coord(2/14)
    
    Content
    Talk, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data" workshop, co-located with the 25th International World Wide Web Conference, April 11, 2016, Montreal, Canada. Montreal 2016.
  15. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.01
    0.009650668 = product of:
      0.067554675 = sum of:
        0.048818428 = weight(_text_:web in 8365) [ClassicSimilarity], result of:
          0.048818428 = score(doc=8365,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.50479853 = fieldWeight in 8365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=8365)
        0.018736245 = product of:
          0.056208733 = sum of:
            0.056208733 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.056208733 = score(doc=8365,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Date
    22. 6.2015 16:08:38
  16. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.01
    0.009324532 = product of:
      0.06527172 = sum of:
        0.05188869 = weight(_text_:retrieval in 5865) [ClassicSimilarity], result of:
          0.05188869 = score(doc=5865,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5788671 = fieldWeight in 5865, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=5865)
        0.013383033 = product of:
          0.040149096 = sum of:
            0.040149096 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.040149096 = score(doc=5865,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
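
The nested product-of / sum-of lines show how the leaf weights combine: each sub-sum is first scaled by a coord factor (matched clauses divided by total clauses) before the outer sum is scaled in turn. A minimal sketch for entry 16 (doc 5865) above, assuming that structure:

```python
# Recombine the explain tree for doc 5865 from its two leaf weights
retrieval_w = 0.05188869          # weight(_text_:retrieval in 5865)
w22 = 0.040149096                 # weight(_text_:22 in 5865)

sub = w22 * (1 / 3)               # inner 0.33333334 = coord(1/3)
total = (retrieval_w + sub) * (2 / 14)   # outer 0.14285715 = coord(2/14)
print(total)                      # ~0.009324532, the entry's displayed score
```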
  17. Surfing versus Drilling for knowledge in science : When should you use your computer? When should you use your brain? (2018) 0.01
    0.009300158 = product of:
      0.04340074 = sum of:
        0.025709987 = weight(_text_:wide in 4564) [ClassicSimilarity], result of:
          0.025709987 = score(doc=4564,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 4564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4564)
        0.005707573 = weight(_text_:information in 4564) [ClassicSimilarity], result of:
          0.005707573 = score(doc=4564,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.10971737 = fieldWeight in 4564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4564)
        0.0119831795 = weight(_text_:retrieval in 4564) [ClassicSimilarity], result of:
          0.0119831795 = score(doc=4564,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.13368362 = fieldWeight in 4564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4564)
      0.21428572 = coord(3/14)
    
    Abstract
    For this second Special Issue of Infozine, we have invited students, teachers, researchers, and software developers to share their opinions about one aspect or another of this broad topic: how to balance drilling (for depth) vs. surfing (for breadth) in scientific learning, teaching, research, and software design, and how the modern digital-liberal system affects our ability to strike this balance. This special issue is meant to provide a wide and unbiased spectrum of possible viewpoints on the topic, helping readers to lucidly define their own position and information use behavior.
    Content
    Editorial: Surfing versus Drilling for Knowledge in Science: When should you use your computer? When should you use your brain? Blaise Pascal: Les deux infinis - The two infinities / Philippe Hünenberger and Oliver Renn - "Surfing" vs. "drilling" in the modern scientific world / Antonio Loprieno - Of millimeter paper and machine learning / Philippe Hünenberger - From one to many, from breadth to depth - industrializing research / Janne Soetbeer - "Deep drilling" requires "surfing" / Gerd Folkers and Laura Folkers - Surfing vs. drilling in science: A delicate balance / Alzbeta Kubincová - Digital trends in academia - for the sake of critical thinking or comfort? / Leif-Thore Deck - I diagnose, therefore I am a Doctor? Will drilling computer software replace human doctors in the future? / Yi Zheng - Surfing versus drilling in fundamental research / Wilfred van Gunsteren - Using brain vs. brute force in computational studies of biological systems / Arieh Warshel - Laboratory literature boards in the digital age / Jeffrey Bode - Research strategies in computational chemistry / Sereina Riniker - Surfing on the hype waves or drilling deep for knowledge? A perspective from industry / Nadine Schneider and Nikolaus Stiefl - The use and purpose of articles and scientists / Philip Mark Lund - Can you look at papers like artwork? / Oliver Renn - Dynamite fishing in the data swamp / Frank Perabo - Streetlights, augmented intelligence, and information discovery / Jeffrey Saffer and Vicki Burnett - "Yes Dave. Happy to do that for you." Why AI, machine learning, and blockchain will lead to deeper "drilling" / Michiel Kolman and Sjors de Heuvel - Trends in scientific document search / Stefan Geißler - Power tools for text mining / Jane Reed - Publishing and patenting: Navigating the differences to ensure search success / Paul Peters
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  18. Vatant, B.: Porting library vocabularies to the Semantic Web, and back : a win-win round trip (2010) 0.01
    0.009130894 = product of:
      0.06391626 = sum of:
        0.055354897 = weight(_text_:web in 3968) [ClassicSimilarity], result of:
          0.055354897 = score(doc=3968,freq=14.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.57238775 = fieldWeight in 3968, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.00856136 = weight(_text_:information in 3968) [ClassicSimilarity], result of:
          0.00856136 = score(doc=3968,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 3968, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
      0.14285715 = coord(2/14)
    
    Abstract
    The role of vocabularies is critical in the long overdue synergy between the Web and library heritage. The Semantic Web should leverage existing vocabularies instead of reinventing them, but the specific features of library vocabularies make them more or less portable to the Semantic Web. Based on preliminary results in the framework of the TELplus project, we suggest guidelines for the evolutions needed to make vocabularies usable and efficient in the Semantic Web realm, assess choices made so far by large libraries to publish vocabularies conformant to standards and good practices, and review how Semantic Web tools can help manage those vocabularies.
    Content
    Talk given in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
    Theme
    Semantic Web
  19. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.01
    0.009124671 = product of:
      0.063872695 = sum of:
        0.055134792 = weight(_text_:web in 3297) [ClassicSimilarity], result of:
          0.055134792 = score(doc=3297,freq=20.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.5701118 = fieldWeight in 3297, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
        0.008737902 = weight(_text_:information in 3297) [ClassicSimilarity], result of:
          0.008737902 = score(doc=3297,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 3297, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
      0.14285715 = coord(2/14)
    
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate information intelligently, provide semantic-based access to the Internet, and extract information from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way."1 This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization."2 In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
    Theme
    Semantic Web
  20. Smiraglia, R.P.: Facets as discourse in knowledge organization : a case study in LISTA (2017) 0.01
    0.009107954 = product of:
      0.042503785 = sum of:
        0.017435152 = weight(_text_:web in 3855) [ClassicSimilarity], result of:
          0.017435152 = score(doc=3855,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 3855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3855)
        0.010089659 = weight(_text_:information in 3855) [ClassicSimilarity], result of:
          0.010089659 = score(doc=3855,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 3855, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3855)
        0.014978974 = weight(_text_:retrieval in 3855) [ClassicSimilarity], result of:
          0.014978974 = score(doc=3855,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 3855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3855)
      0.21428572 = coord(3/14)
    
    Abstract
    Knowledge Organization Systems (KOSs) use arrays of related concepts to capture the ontological content of a domain; hierarchical structures are typical of such systems. Some KOSs also employ sets of cross-conceptual descriptors that express different dimensions within a domain: facets. The recent increase in the prominence of facets and faceted systems has had major impact on the intension of the KO domain, and this is visible in the domain's literature. An interesting question is how the discourse surrounding facets in KO and in related domains such as information science might be described. The present paper reports one case study in an ongoing research project to investigate the discourse of facets in KO. In this particular case, the formal current research literature represented by inclusion in the "Library, Information Science & Technology Abstracts, Full Text" (LISTA) database is analyzed to discover aspects of the research front and its ongoing discourse concerning facets. A dataset of 1682 citations was analyzed. Results show that thinking concerning information retrieval and the Semantic Web resides alongside implementation of faceted searching and the growth of faceted thesauri. Faceted classification remains important to the discourse, but the use of facet analysis is linked directly to applied aspects of information science.

Languages

  • e 192
  • d 145
  • i 4
  • f 2
  • a 1
  • el 1
  • es 1

Types

  • a 219
  • s 12
  • r 11
  • x 9
  • m 7
  • n 3