-
Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005)
- Abstract
- With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very limited use for the user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the inherently ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
- Content
- Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
-
Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016)
- Source
- Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
- Type
- a
-
Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009)
- Date
- 31. 7.2010 16:58:22
- Type
- a
-
Faaborg, A.; Lagoze, C.: Semantic browsing (2003)
- Abstract
- We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
- Source
- Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
- Type
- a
-
Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000)
- Abstract
- XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
- Date
- 11. 5.2013 19:22:18
- Type
- a
-
Malmsten, M.: Making a library catalogue part of the Semantic Web (2008)
- Abstract
- Library catalogues contain an enormous amount of structured, high-quality data; however, this data is generally not made available to Semantic Web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and on the mechanisms used to make the data available, rather than on perfect description of the individual resources. We also present a method of creating links between records of the same work.
- Source
- Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
- Type
- a
-
Blumauer, A.; Pellegrini, T.: Semantic Web Revisited : Eine kurze Einführung in das Social Semantic Web (2009)
- Pages
- S.3-22
- Source
- Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
- Type
- a
-
Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008)
- Date
- 22. 1.2011 10:38:28
- Source
- Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a
- Type
- a
-
Franklin, R.A.: Re-inventing subject access for the semantic web (2003)
- Abstract
- First-generation scholarly research on the Web lacked a firm system of authority control. Second-generation Web research is beginning to model subject access with library-science principles of bibliographic control and cataloguing. Harnessing the Web and organising its intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of related concepts. Web design features such as these add value to discovery and filter out data that lacks authority. The system design allows for scalability and extensibility, two technical features that are integral to the future development of the digital library and resource discovery.
- Date
- 30.12.2008 18:22:46
- Type
- a
-
Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Wa, R.Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013)
- Abstract
- The concept of Linked Data has made its entrance into the cultural heritage sector due to its potential use for the integration of heterogeneous collections and for deriving additional value from existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Art and Architecture Thesaurus (AAT) with the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified, and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
- Date
- 22. 3.2013 19:29:20
- Type
- a
-
Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006)
- Abstract
- Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
- Source
- Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
- Type
- a
-
Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015)
- Abstract
- Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
- Footnote
- Contribution to a special section "Linked data and the charm of weak semantics".
- Source
- Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
- Type
- a
-
Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007)
- Date
- 22. 9.2007 15:41:14
-
Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008)
- Abstract
- In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC), 4th edition, and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). The major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility through several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by the differing granularities of the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process.
Although OWL Lite and OWL Full provide richer expressiveness, cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
- Source
- Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
- Type
- a
-
Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010)
- Abstract
- More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
- Date
- 26.12.2011 13:40:22
-
Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012)
- Abstract
- Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information science a broad and comprehensible introduction to indexing. The title consists of twelve chapters: an introduction to subject headings and thesauri; automatic indexing versus manual indexing; techniques applied in automatic indexing of text material; automatic indexing of images; the black art of indexing moving images; automatic indexing of music; taxonomies and ontologies; metadata formats and indexing; tagging; topic maps; indexing the Web; and the Semantic Web.
- Date
- 24. 8.2016 14:03:22
-
Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018)
- Abstract
- Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
- Date
- 16.11.2018 14:22:01
- Type
- a
-
Metadata and semantics research : 7th Research Conference, MTSR 2013, Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013)
- Abstract
- Metadata and semantics are integral to any information system and significant to the sphere of Web data. Research focusing on metadata and semantics is crucial for advancing our understanding and knowledge of metadata; and, more profoundly, for being able to effectively discover, use, archive, and repurpose information. In response to this need, researchers are actively examining methods for generating, reusing, and interchanging metadata. Integrated with these developments is research on the application of computational methods, linked data, and data analytics. A growing body of work also targets conceptual and theoretical designs providing foundational frameworks for metadata and semantic applications. There is no doubt that metadata weaves its way into nearly every aspect of our information ecosystem, and there is great motivation for advancing the current state of metadata and semantics. To this end, it is vital that scholars and practitioners convene and share their work.
The MTSR 2013 program and the contents of these proceedings show a rich diversity of research and practices, drawing on problems from metadata and semantically focused tools and technologies, linked data, cross-language semantics, ontologies, metadata models, and semantic systems and metadata standards. The general session of the conference included 18 papers covering a broad spectrum of topics, demonstrating the interdisciplinary nature of the metadata field, and was divided into three main themes: platforms for research data sets, system architecture and data management; metadata and ontology validation, evaluation, mapping and interoperability; and content management. Metadata as a research topic is maturing, and the conference also supported the following five tracks: Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures; Metadata and Semantics for Cultural Collections and Applications; Metadata and Semantics for Agriculture, Food and Environment; Big Data and Digital Libraries in Health, Science and Technology; and European and National Projects, and Project Networking. Each track had a rich selection of papers, giving broader diversity to MTSR, and enabling deeper exploration of significant topics.
All the papers underwent a thorough and rigorous peer-review process. The review and selection this year was highly competitive, and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were accepted, totaling 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the Arts Department of KU Leuven (Belgium) and director of the university library, addressed semantic research drawing from his work with Europeana. The title of his presentation was, "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor from our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new potential at the intersection of search and linked data. The title of his talk was, "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
- Date
- 17.12.2013 12:51:22
-
Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007)
- Date
- 22. 9.2007 15:41:14
-
Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007)
- Date
- 22. 9.2007 15:41:14