Search (112 results, page 1 of 6)

  • theme_ss:"Semantic Web"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
  1. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.05
    0.05006103 = product of:
      0.08343504 = sum of:
        0.009823184 = weight(_text_:a in 4404) [ClassicSimilarity], result of:
          0.009823184 = score(doc=4404,freq=26.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18373153 = fieldWeight in 4404, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4404)
        0.06362687 = weight(_text_:91 in 4404) [ClassicSimilarity], result of:
          0.06362687 = score(doc=4404,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.24625893 = fieldWeight in 4404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03125 = fieldNorm(doc=4404)
        0.009984984 = product of:
          0.019969968 = sum of:
            0.019969968 = weight(_text_:information in 4404) [ClassicSimilarity], result of:
              0.019969968 = score(doc=4404,freq=20.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2453355 = fieldWeight in 4404, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4404)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
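
    A note on reading the score breakdowns in this list: they are standard Lucene ClassicSimilarity (TF-IDF) explain output. A minimal Python sketch (not code from this database; function and variable names are our own) reproduces the clause values above from tf, idf, queryNorm and fieldNorm:
      import math

      def clause_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                    # idf(t) * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm  # tf(t in d) * idf(t) * fieldNorm(d)
          return query_weight * field_weight

      w_a    = clause_score(26.0, 1.153047,  0.046368346, 0.03125)        # ~0.009823
      w_91   = clause_score( 2.0, 5.5722036, 0.046368346, 0.03125)        # ~0.063627
      w_info = clause_score(20.0, 1.7554779, 0.046368346, 0.03125) * 0.5  # coord(1/2), ~0.009985

      print((w_a + w_91 + w_info) * 3.0 / 5.0)  # coord(3/5) -> ~0.05006103, the document score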
    
    Abstract
    Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in a relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy, giving it an enormous expressiveness. People add private, educational or organizational content to the web that is of an immensely diverse nature. Content on the web is growing closer to a real universal knowledge base, with one problem remaining relatively undefined: the problem of the interpretation of its contents. Although widely acknowledged for its general and universal advantages, the increasing popularity of the web also shows us some major drawbacks. The development of the information content on the web during the last year alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
    Thus, there is a clear need for the web to become more semantic. The aim of introducing semantics into the web is to enhance the precision of search, but also to enable the use of logical reasoning on web contents in order to answer queries. The CORPORUM OntoBuilder toolset is developed specifically for this task. It consists of a set of applications that can fulfil a variety of tasks, either as stand-alone tools, or augmenting each other. Important tasks that are dealt with by CORPORUM are related to document and information retrieval (find relevant documents, or support the user finding them), as well as information extraction (building a knowledge base from web documents to answer queries), information dissemination (summarizing strategies and information visualization), and automated document classification strategies. First versions of the toolset are encouraging in that they show large potential as a supportive technology for building up the Semantic Web. In this chapter, methods for transforming the current web into a semantic web are discussed, as well as a technical solution that can perform this task: the CORPORUM toolset. First, the toolset is introduced, followed by some pragmatic issues relating to the approach; then there is a short overview of the theory in relation to CognIT's vision; and finally, a discussion of some of the applications that arose from the project.
    Pages
    S.91-115
    Type
    a
  2. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.03
    0.026682377 = product of:
      0.06670594 = sum of:
        0.011678694 = weight(_text_:a in 1026) [ClassicSimilarity], result of:
          0.011678694 = score(doc=1026,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21843673 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.05502725 = sum of:
          0.011051352 = weight(_text_:information in 1026) [ClassicSimilarity], result of:
            0.011051352 = score(doc=1026,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.13576832 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
          0.043975897 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
            0.043975897 = score(doc=1026,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.2708308 = fieldWeight in 1026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1026)
      0.4 = coord(2/5)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Type
    a
  3. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.03
    0.025825147 = product of:
      0.064562865 = sum of:
        0.009535614 = weight(_text_:a in 759) [ClassicSimilarity], result of:
          0.009535614 = score(doc=759,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 759, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.05502725 = sum of:
          0.011051352 = weight(_text_:information in 759) [ClassicSimilarity], result of:
            0.011051352 = score(doc=759,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.13576832 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.043975897 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.043975897 = score(doc=759,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.4 = coord(2/5)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Type
    a
  4. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.02
    0.022870608 = product of:
      0.05717652 = sum of:
        0.0100103095 = weight(_text_:a in 2556) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=2556,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 2556, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 2556) [ClassicSimilarity], result of:
            0.009472587 = score(doc=2556,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
          0.037693623 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
            0.037693623 = score(doc=2556,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
      0.4 = coord(2/5)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Source
    Online information review. 27(2003) no.2, S.94-101
    Type
    a
  5. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.02
    0.01837423 = product of:
      0.045935575 = sum of:
        0.011875651 = weight(_text_:a in 2661) [ClassicSimilarity], result of:
          0.011875651 = score(doc=2661,freq=38.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22212058 = fieldWeight in 2661, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2661)
        0.034059923 = sum of:
          0.0089308405 = weight(_text_:information in 2661) [ClassicSimilarity], result of:
            0.0089308405 = score(doc=2661,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.10971737 = fieldWeight in 2661, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.03125 = fieldNorm(doc=2661)
          0.025129084 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
            0.025129084 = score(doc=2661,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.15476047 = fieldWeight in 2661, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2661)
      0.4 = coord(2/5)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits - the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advance one's knowledge in a discipline and retain interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and about the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data is drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  6. Subirats, I.; Prasad, A.R.D.; Keizer, J.; Bagdanov, A.: Implementation of rich metadata formats and semantic tools using DSpace (2008) 0.02
    0.01750921 = product of:
      0.043773025 = sum of:
        0.00770594 = weight(_text_:a in 2656) [ClassicSimilarity], result of:
          0.00770594 = score(doc=2656,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 2656, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2656)
        0.036067087 = sum of:
          0.010938003 = weight(_text_:information in 2656) [ClassicSimilarity], result of:
            0.010938003 = score(doc=2656,freq=6.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.1343758 = fieldWeight in 2656, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.03125 = fieldNorm(doc=2656)
          0.025129084 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
            0.025129084 = score(doc=2656,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.15476047 = fieldWeight in 2656, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2656)
      0.4 = coord(2/5)
    
    Abstract
    This poster explores the customization of DSpace to allow the use of the AGRIS Application Profile metadata standard and the AGROVOC thesaurus. The objective is the adaptation of DSpace, through the least invasive code changes either in the form of plug-ins or add-ons, to the specific needs of the Agricultural Sciences and Technology community. Metadata standards such as AGRIS AP, and Knowledge Organization Systems such as the AGROVOC thesaurus, provide mechanisms for sharing information in a standardized manner by recommending the use of common semantics and interoperable syntax (Subirats et al., 2007). AGRIS AP was created to enhance the description, exchange and subsequent retrieval of agricultural Document-like Information Objects (DLIOs). It is a metadata schema which draws from metadata standards such as Dublin Core (DC), the Australian Government Locator Service Metadata (AGLS) and the Agricultural Metadata Element Set (AgMES) namespaces. It allows sharing of information across dispersed bibliographic systems (FAO, 2005). AGROVOC is a multilingual structured thesaurus covering agricultural and related domains. Its main role is to standardize the indexing process in order to make searching simpler and more efficient. AGROVOC is developed by FAO (Lauser et al., 2006). The customization of DSpace is taking place in several phases. First, the AGRIS AP metadata schema was mapped onto the DSpace metadata model, with several enhancements implemented to support AGRIS AP elements. Next, AGROVOC will be integrated as a controlled vocabulary accessed through a local SKOS or OWL file. Eventually the system will be configurable to access AGROVOC through local files or remotely via webservices. Finally, spell checking and tooltips will be incorporated in the user interface to support metadata editing. Adapting DSpace to support AGRIS AP and annotation using the semantically-rich AGROVOC thesaurus transforms DSpace into a powerful, domain-specific system for annotation and exchange of bibliographic metadata in the agricultural domain.
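    A minimal sketch of the AGROVOC integration phase described above - reading a local SKOS file so that preferred labels can back a controlled-vocabulary lookup in the metadata editor - might look as follows (rdflib is assumed; the file name and the lookup term are illustrative, not taken from the AGRIS/DSpace code):
      # Hypothetical illustration: index skos:prefLabel values from a local AGROVOC dump.
      from rdflib import Graph
      from rdflib.namespace import SKOS

      g = Graph()
      g.parse("agrovoc.rdf")  # local SKOS/OWL file, as described above (file name assumed)

      labels = {}
      for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
          if getattr(label, "language", None) in (None, "en"):
              labels[str(label).lower()] = concept

      print(labels.get("maize"))  # AGROVOC concept URI for the term, if present in the dump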
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  7. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.02
    0.015224208 = product of:
      0.03806052 = sum of:
        0.008258085 = weight(_text_:a in 2665) [ClassicSimilarity], result of:
          0.008258085 = score(doc=2665,freq=24.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 2665, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.029802434 = sum of:
          0.007814486 = weight(_text_:information in 2665) [ClassicSimilarity], result of:
            0.007814486 = score(doc=2665,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.0960027 = fieldWeight in 2665, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2665)
          0.021987949 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
            0.021987949 = score(doc=2665,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.1354154 = fieldWeight in 2665, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2665)
      0.4 = coord(2/5)
    
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations and Geonames (http://geonames.org/) URIs for places are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists.
    Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
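    As a hedged sketch of the linking idea described above - tying a mention in a text to stable identifiers and expressing the relationship with a standardized vocabulary - one could emit triples along these lines (rdflib and Dublin Core terms assumed; the URIs follow the identifier patterns named in the abstract but are invented examples, not output of the authors' system):
      # Illustrative only: relate a text to a place ("where") and a person ("who")
      # using stable-identifier URIs and DCMI terms.
      from rdflib import Graph, URIRef
      from rdflib.namespace import DCTERMS

      g = Graph()
      text   = URIRef("http://example.org/texts/irish-history-vol1")    # hypothetical item
      place  = URIRef("http://sws.geonames.org/2964574/")               # Geonames-style URI (example id)
      person = URIRef("http://worldcat.org/identities/lccn-n80-12345")  # WorldCat Identities-style URI (example id)

      g.add((text, DCTERMS.spatial, place))       # where
      g.add((text, DCTERMS.contributor, person))  # who
      print(g.serialize(format="turtle"))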
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  8. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.012231203 = product of:
      0.030578006 = sum of:
        0.005448922 = weight(_text_:a in 3376) [ClassicSimilarity], result of:
          0.005448922 = score(doc=3376,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 3376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3376)
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.050258167 = score(doc=3376,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    31. 7.2010 16:58:22
    Type
    a
  9. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.01
    0.012098414 = product of:
      0.030246034 = sum of:
        0.008258085 = weight(_text_:a in 2640) [ClassicSimilarity], result of:
          0.008258085 = score(doc=2640,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 2640, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2640)
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.043975897 = score(doc=2640,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Library catalogues contain an enormous amount of structured, high-quality data; however, this data is generally not made available to semantic web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and the mechanisms used to make data available, rather than perfect description of the individual resources. We also present a method of creating links between records of the same work.
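    A minimal sketch of consuming such a linked-data catalogue record - asking for RDF instead of HTML via content negotiation - is given below; the record URI is an assumed example, and whether the server honours the Accept header must be verified:
      # Hypothetical illustration of content negotiation against a linked-data catalogue.
      import urllib.request

      uri = "http://libris.kb.se/bib/1234567"  # example record URI (assumed pattern)
      req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
      with urllib.request.urlopen(req) as resp:
          rdf_xml = resp.read().decode("utf-8")
      print(rdf_xml[:200])  # start of the RDF description, if the server supports it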
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  10. Blumauer, A.; Pellegrini, T.: Semantic Web Revisited : Eine kurze Einführung in das Social Semantic Web (2009) 0.01
    0.012098414 = product of:
      0.030246034 = sum of:
        0.008258085 = weight(_text_:a in 4855) [ClassicSimilarity], result of:
          0.008258085 = score(doc=4855,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 4855, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4855)
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 4855) [ClassicSimilarity], result of:
              0.043975897 = score(doc=4855,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 4855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4855)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Pages
    S.3-22
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
    Type
    a
  11. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    0.011492259 = product of:
      0.028730646 = sum of:
        0.0067426977 = weight(_text_:a in 4184) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=4184,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 4184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4184)
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.043975897 = score(doc=4184,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a
    Type
    a
  12. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.01
    0.010808079 = product of:
      0.027020195 = sum of:
        0.008173384 = weight(_text_:a in 2418) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2418,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2418, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.018846812 = product of:
          0.037693623 = sum of:
            0.037693623 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.037693623 = score(doc=2418,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
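    The matching itself is done with ontology mapping tools; purely as an illustration of the simplest conceivable baseline (exact matching of SKOS preferred labels, which is not the method used in the paper), a sketch could look like this:
      # Naive baseline alignment: exact prefLabel matches between two SKOS vocabularies.
      # Illustrative stand-in for the ontology mapping tools discussed above; file names assumed.
      from rdflib import Graph
      from rdflib.namespace import SKOS

      def pref_labels(path):
          g = Graph()
          g.parse(path)
          return {str(o).lower(): s for s, _, o in g.triples((None, SKOS.prefLabel, None))}

      vocab_a = pref_labels("collection_a_vocabulary.rdf")
      vocab_b = pref_labels("collection_b_vocabulary.rdf")

      alignment = [(vocab_a[lbl], vocab_b[lbl]) for lbl in vocab_a if lbl in vocab_b]
      print(len(alignment), "candidate matches")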
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
    Type
    a
  13. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.01
    0.010376932 = product of:
      0.02594233 = sum of:
        0.008173384 = weight(_text_:a in 2654) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2654,freq=18.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2654, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2654)
        0.017768946 = product of:
          0.03553789 = sum of:
            0.03553789 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.03553789 = score(doc=2654,freq=4.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21886435 = fieldWeight in 2654, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares the sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
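    A hedged sketch of what a single integrated CCT entry might look like in SKOS - a class notation carrying preferred labels and a link to a mapped thesaurus term - is shown below; the namespace, notation, labels and related term are invented placeholders, not actual CCT data:
      # Illustrative SKOS encoding of one integrated class/thesaurus entry (invented data).
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import SKOS

      CCT = Namespace("http://example.org/cct/")  # placeholder namespace
      g = Graph()

      c = CCT["TP391"]  # placeholder class notation
      g.add((c, SKOS.notation, Literal("TP391")))
      g.add((c, SKOS.prefLabel, Literal("信息处理", lang="zh")))
      g.add((c, SKOS.prefLabel, Literal("Information processing", lang="en")))
      g.add((c, SKOS.related, CCT["thesaurus-term-0001"]))  # mapped thesaurus entry (placeholder)

      print(g.serialize(format="turtle"))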
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  14. Ding, Y.: ¬A review of ontologies with the Semantic Web in view (2001) 0.01
    0.009814699 = product of:
      0.024536747 = sum of:
        0.013485395 = weight(_text_:a in 4152) [ClassicSimilarity], result of:
          0.013485395 = score(doc=4152,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.25222903 = fieldWeight in 4152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4152)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 4152) [ClassicSimilarity], result of:
              0.022102704 = score(doc=4152,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4152)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of information science. 27(2001) no.?, S.377-384
    Type
    a
  15. Voss, J.: LibraryThing : Web 2.0 für Literaturfreunde und Bibliotheken (2007) 0.01
    0.00854215 = product of:
      0.021355376 = sum of:
        0.0017027882 = weight(_text_:a in 1847) [ClassicSimilarity], result of:
          0.0017027882 = score(doc=1847,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.03184872 = fieldWeight in 1847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1847)
        0.019652588 = sum of:
          0.003946911 = weight(_text_:information in 1847) [ClassicSimilarity], result of:
            0.003946911 = score(doc=1847,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.048488684 = fieldWeight in 1847, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1847)
          0.015705677 = weight(_text_:22 in 1847) [ClassicSimilarity], result of:
            0.015705677 = score(doc=1847,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.09672529 = fieldWeight in 1847, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1847)
      0.4 = coord(2/5)
    
    Content
    Collaboration with libraries: Tim Spalding advocated cooperation with libraries early on. When adding new books to LibraryThing, users can choose from numerous library catalogues, which are integrated via Z39.50 - since October 2006 the GBV has been among them. In April 2007 Tim Spalding released LibraryThing for Libraries, a set of web services that libraries can embed in their OPACs. A web service is a function that can be called by other programs over the web and returns data. Since June 2006, various open LibraryThing web services have made it possible, among other things, to determine for a given ISBN its language and a list of ISBNs of other editions and translations that belong to the same work (thingISBN). With this, LibraryThing effectively implements part of the Functional Requirements for Bibliographic Records (FRBR), which have been discussed in library science circles since the early 1990s but have so far not really found their way into catalogues. The information about which books belong to the same work is supplied by the LibraryThing community; any user can merge individual editions with one click or separate them again. Comparisons with OCLC's similar xISBN service show that thingISBN and xISBN complement each other well; unlike OCLC, however, LibraryThing offers its web service free of charge. Besides recommendations of related books, LibraryThing for Libraries also makes it possible to embed the tags assigned by users into a library's own catalogue. A drawback so far, however, is the dominance of the English language and the fact that only independent titles with an ISBN are covered. The VZG is currently examining in which form LibraryThing for Libraries can best be implemented in GBV libraries. Nothing, however, keeps any individual library from experimenting right now with how its own OPAC could look with additional links and tags from LibraryThing. Beyond that, libraries can also take part in LibraryThing as users with their own account. The Stadtbücherei Nordenham, for example, has been adding the new acquisitions of its adult collection to a LibraryThing collection since the end of 2005.
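    A minimal sketch of calling the thingISBN web service mentioned above from a catalogue script is given below; the endpoint URL follows LibraryThing's published API pattern as we understand it and should be verified before use:
      # Hedged example: ask thingISBN for ISBNs of other editions/translations of the same work.
      import urllib.request
      import xml.etree.ElementTree as ET

      isbn = "0441172717"  # example ISBN
      url = "https://www.librarything.com/api/thingISBN/" + isbn  # endpoint pattern assumed
      with urllib.request.urlopen(url) as resp:
          tree = ET.parse(resp)

      related = [el.text for el in tree.iter("isbn")]
      print(related)  # ISBNs LibraryThing assigns to the same work (FRBR-like grouping)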
    Date
    22. 9.2007 10:36:23
    Type
    a
  16. Davies, J.; Weeks, R.; Krohn, U.: QuizRDF: search technology for the Semantic Web (2004) 0.01
    0.00816826 = product of:
      0.020420648 = sum of:
        0.009036016 = weight(_text_:a in 4406) [ClassicSimilarity], result of:
          0.009036016 = score(doc=4406,freq=22.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.16900843 = fieldWeight in 4406, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4406)
        0.011384632 = product of:
          0.022769265 = sum of:
            0.022769265 = weight(_text_:information in 4406) [ClassicSimilarity], result of:
              0.022769265 = score(doc=4406,freq=26.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2797255 = fieldWeight in 4406, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4406)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Important information is often scattered across Web and/or intranet resources. Traditional search engines return ranked retrieval lists that offer little or no information on the semantic relationships among documents. Knowledge workers spend a substantial amount of their time browsing and reading to find out how documents are related to one another and where each falls into the overall structure of the problem domain. Yet only when knowledge workers begin to locate the similarities and differences among pieces of information do they move into an essential part of their work: building relationships to create new knowledge. Information retrieval traditionally focuses on the relationship between a given query (or user profile) and the information store. On the other hand, exploitation of interrelationships between selected pieces of information (which can be facilitated by the use of ontologies) can put otherwise isolated information into a meaningful context. The implicit structures so revealed help users use and manage information more efficiently. Knowledge management tools are needed that integrate the resources dispersed across Web resources into a coherent corpus of interrelated information. Previous research in information integration has largely focused on integrating heterogeneous databases and knowledge bases, which represent information in a highly structured way, often by means of formal languages. In contrast, the Web consists to a large extent of unstructured or semi-structured natural language texts. As we have seen, ontologies offer an alternative way to cope with heterogeneous representations of Web resources. The domain model implicit in an ontology can be taken as a unifying structure for giving information a common representation and semantics. Once such a unifying structure exists, it can be exploited to improve browsing and retrieval performance in information access tools. QuizRDF is an example of such a tool.
    Type
    a
  17. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.01
    0.007639394 = product of:
      0.019098485 = sum of:
        0.0067426977 = weight(_text_:a in 4329) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=4329,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 4329, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4329)
        0.012355788 = product of:
          0.024711575 = sum of:
            0.024711575 = weight(_text_:information in 4329) [ClassicSimilarity], result of:
              0.024711575 = score(doc=4329,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3035872 = fieldWeight in 4329, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4329)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Plenty of contemporary attempts to search exist that are associated with the area of Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist? To answer these questions we take a look at the nature of the Semantic Web and Semantic Desktop and at definitions for information and data retrieval. We survey current approaches referred to by their authors as information retrieval for the Semantic Web or that use Semantic Web technology for search.
    Source
    Lernen - Wissen - Adaption : workshop proceedings / LWA 2007, Halle, September 2007. Martin Luther University Halle-Wittenberg, Institute for Informatics, Databases and Information Systems. Hrsg.: Alexander Hinneburg
    Type
    a
  18. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.01
    0.0075114607 = product of:
      0.018778652 = sum of:
        0.0076151006 = weight(_text_:a in 6061) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=6061,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 6061, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.011163551 = product of:
          0.022327103 = sum of:
            0.022327103 = weight(_text_:information in 6061) [ClassicSimilarity], result of:
              0.022327103 = score(doc=6061,freq=16.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27429342 = fieldWeight in 6061, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6061)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link topology based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a previously unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
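    Purely as an illustration of the link topology based ranking referred to above (not code from the article), the core idea - a page inherits authority from the pages that link to it - can be sketched as a few lines of PageRank-style power iteration over a toy graph:
      # Toy link graph and PageRank-style power iteration (illustrative only).
      links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
      pages = list(links)
      rank = {p: 1.0 / len(pages) for p in pages}
      d = 0.85  # damping factor

      for _ in range(50):
          new = {p: (1 - d) / len(pages) for p in pages}
          for p, outgoing in links.items():
              for q in outgoing:
                  new[q] += d * rank[p] / len(outgoing)
          rank = new

      print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "C" ends up most authoritative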
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
    Type
    a
  19. Davies, J.; Duke, A.; Stonkus, A.: OntoShare: evolving ontologies in a knowledge sharing system (2004) 0.01
    0.007361024 = product of:
      0.01840256 = sum of:
        0.010114046 = weight(_text_:a in 4409) [ClassicSimilarity], result of:
          0.010114046 = score(doc=4409,freq=36.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18917176 = fieldWeight in 4409, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4409)
        0.008288514 = product of:
          0.016577028 = sum of:
            0.016577028 = weight(_text_:information in 4409) [ClassicSimilarity], result of:
              0.016577028 = score(doc=4409,freq=18.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20365247 = fieldWeight in 4409, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4409)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We saw in the introduction how the Semantic Web makes possible a new generation of knowledge management tools. We now turn our attention more specifically to Semantic Web based support for virtual communities of practice. The notion of communities of practice has attracted much attention in the field of knowledge management. Communities of practice are groups within (or sometimes across) organizations who share a common set of information needs or problems. They are typically not a formal organizational unit but an informal network, each sharing in part a common agenda and shared interests or issues. In one example it was found that a lot of knowledge sharing among copier engineers took place through informal exchanges, often around a water cooler. As well as local, geographically based communities, trends towards flexible working and globalisation have led to interest in supporting dispersed communities using Internet technology. The challenge for organizations is to support such communities and make them effective. Provided with an ontology meeting the needs of a particular community of practice, knowledge management tools can arrange knowledge assets into the predefined conceptual classes of the ontology, allowing more natural and intuitive access to knowledge. Knowledge management tools must give users the ability to organize information into a controllable asset. Building an intranet-based store of information is not sufficient for knowledge management; the relationships within the stored information are vital. These relationships cover such diverse issues as relative importance, context, sequence, significance, causality and association. The potential for knowledge management tools is vast; not only can they make better use of the raw information already available, but they can sift, abstract and help to share new information, and present it to users in new and compelling ways.
    In this chapter, we describe the OntoShare system which facilitates and encourages the sharing of information between communities of practice within (or perhaps across) organizations and which encourages people - who may not previously have known of each other's existence in a large organization - to make contact where there are mutual concerns or interests. As users contribute information to the community, a knowledge resource annotated with meta-data is created. Ontologies defined using the resource description framework (RDF) and RDF Schema (RDFS) are used in this process. RDF is a W3C recommendation for the formulation of meta-data for WWW resources. RDF(S) extends this standard with the means to specify domain vocabulary and object structures - that is, concepts and the relationships that hold between them. In the next section, we describe in detail the way in which OntoShare can be used to share and retrieve knowledge and how that knowledge is represented in an RDF-based ontology. We then proceed to discuss in Section 10.3 how the ontologies in OntoShare evolve over time based on user interaction with the system and motivate our approach to user-based creation of RDF-annotated information resources. The way in which OntoShare can help to locate expertise within an organization is then described, followed by a discussion of the sociotechnical issues of deploying such a tool. Finally, a planned evaluation exercise and avenues for further research are outlined.
    Type
    a
  20. Severiens, T.; Thiemann, C.: RDF database for PhysNet and similar portals (2006) 0.01
    0.0073474604 = product of:
      0.01836865 = sum of:
        0.009437811 = weight(_text_:a in 245) [ClassicSimilarity], result of:
          0.009437811 = score(doc=245,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17652355 = fieldWeight in 245, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=245)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 245) [ClassicSimilarity], result of:
              0.017861681 = score(doc=245,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 245, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=245)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    PhysNet (www.physnet.net) is a portal for Physics that has been run since 1995 and is continuously being developed; today it uses an OWL Lite ontology and a mySQL database for storing triples with the facts, such as department information, postal addresses, GPS coordinates, URLs of publication repositories, etc. The article focuses on the structure and the development of the underlying ontology; it also gives a detailed overview of an online web-based editorial tool for maintaining the facts database.
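    As a rough sketch of the storage idea mentioned above - fact triples kept in a relational database behind the ontology - the following uses sqlite purely as a self-contained stand-in for mySQL; table, column and value names are invented:
      # Illustrative triple table (sqlite stands in for the mySQL database described above).
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
      con.execute("INSERT INTO triples VALUES (?, ?, ?)",
                  ("dept:example-physics-dept", "ex:hasHomepage", "http://www.example.org/physics"))
      con.execute("INSERT INTO triples VALUES (?, ?, ?)",
                  ("dept:example-physics-dept", "ex:postalAddress", "P.O. Box 1, Exampletown"))

      for row in con.execute("SELECT predicate, object FROM triples WHERE subject = ?",
                             ("dept:example-physics-dept",)):
          print(row)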
    Theme
    Information Gateway
    Type
    a

Languages

  • e 72
  • d 39