Search (672 results, page 2 of 34)

  • Active filter: type_ss:"el"
  1. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.05
    0.046163693 = product of:
      0.13849108 = sum of:
        0.09916721 = weight(_text_:web in 4260) [ClassicSimilarity], result of:
          0.09916721 = score(doc=4260,freq=20.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6841342 = fieldWeight in 4260, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4260)
        0.039323866 = weight(_text_:computer in 4260) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4260,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4260)
      0.33333334 = coord(2/6)
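    (Reading the explain tree: ClassicSimilarity is Lucene's classic TF-IDF model, so each term clause above is the product of a query weight and a field weight. With Lucene's documented formulas, tf(t,d) = \sqrt{\mathrm{freq}} and \mathrm{idf}(t) = 1 + \ln\big(\mathrm{maxDocs} / (\mathrm{docFreq} + 1)\big), a term clause is

      \mathrm{score}(t,d) \;=\; \underbrace{\mathrm{idf}(t)\cdot \mathrm{queryNorm}}_{\mathrm{queryWeight}} \;\cdot\; \underbrace{\sqrt{\mathrm{freq}}\cdot \mathrm{idf}(t)\cdot \mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}}

    For the "web" clause above: queryWeight = 3.2635105 × 0.044416238 ≈ 0.14495286, fieldWeight = √20 × 3.2635105 × 0.046875 ≈ 0.6841342, and their product is the clause score 0.09916721. The entry score is the sum of such clauses multiplied by the coordination factor coord(2/6), i.e. 2 of the 6 query terms matched.)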
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
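    As a hedged illustration of the "sophisticated queries" the abstract mentions (not code from the paper; the endpoint URL, prefixes, and property names are assumptions based on the public DBpedia service), a minimal Python sketch:

      # Minimal sketch: querying the public DBpedia SPARQL endpoint.
      # Requires the SPARQLWrapper package; the query is an invented
      # example, not taken from the paper.
      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setQuery("""
          PREFIX dbo: <http://dbpedia.org/ontology/>
          PREFIX dbr: <http://dbpedia.org/resource/>
          SELECT ?city ?pop WHERE {
              ?city a dbo:City ;
                    dbo:country dbr:South_Korea ;
                    dbo:populationTotal ?pop .
          } ORDER BY DESC(?pop) LIMIT 5
      """)
      sparql.setReturnFormat(JSON)
      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["city"]["value"], row["pop"]["value"])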
    Series
    Lecture notes in computer science ; 4825
    Source
    The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings. Ed.: Karl Aberer et al
    Theme
    Semantic Web
  2. RDF Primer : W3C Recommendation 10 February 2004 (2004) 0.05
    0.045401078 = product of:
      0.13620323 = sum of:
        0.07707134 = weight(_text_:wide in 3064) [ClassicSimilarity], result of:
          0.07707134 = score(doc=3064,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 3064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3064)
        0.059131898 = weight(_text_:web in 3064) [ClassicSimilarity], result of:
          0.059131898 = score(doc=3064,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 3064, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3064)
      0.33333334 = coord(2/6)
    
    Abstract
    The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web. This Primer is designed to provide the reader with the basic knowledge required to effectively use RDF. It introduces the basic concepts of RDF and describes its XML syntax. It describes how to define RDF vocabularies using the RDF Vocabulary Description Language, and gives an overview of some deployed RDF applications. It also describes the content and purpose of other RDF specification documents.
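    To make the triple model concrete, here is a minimal sketch using the Python rdflib package (an assumption; the Primer itself is language-neutral) that builds one RDF statement and serializes it in the XML syntax the Primer introduces. The example URI and name are invented:

      # One RDF statement, (subject, predicate, object), serialized as RDF/XML.
      # rdflib and the example resource are illustrative, not from the Primer.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import FOAF

      EX = Namespace("http://example.org/")
      g = Graph()
      g.bind("foaf", FOAF)
      g.add((EX["alice"], FOAF.name, Literal("Alice")))  # a single triple
      print(g.serialize(format="xml"))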
    Theme
    Semantic Web
  3. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.04
    0.044273444 = product of:
      0.08854689 = sum of:
        0.028901752 = weight(_text_:wide in 1197) [ClassicSimilarity], result of:
          0.028901752 = score(doc=1197,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.14686027 = fieldWeight in 1197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
        0.015679711 = weight(_text_:web in 1197) [ClassicSimilarity], result of:
          0.015679711 = score(doc=1197,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.108171105 = fieldWeight in 1197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
        0.043965418 = weight(_text_:computer in 1197) [ClassicSimilarity], result of:
          0.043965418 = score(doc=1197,freq=10.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.2708572 = fieldWeight in 1197, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
      0.5 = coord(3/6)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read, a process that is often frustrating and tedious. This is exacerbated by the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer-mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing them to home in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves, which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing", but what does this really mean? In traditional information spaces such as libraries, we can often move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space: we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries, such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
    When our experience of information discovery is mediated by a computer, we move neither ourselves nor the monitor. We have only the computer's monitor to view, and the keyboard and/or mouse to manipulate what is displayed there. Computer interfaces often reduce our ability to get a sense of the contents of a library: we don't perceive the scope of the library, its breadth (the quantity of materials/information), its density (how full the shelves are, how thorough the collection is for individual topics), or the general audience for the materials (e.g., whether the materials are appropriate for middle school students, college professors, etc.). Additionally, many computer interfaces for information discovery require users to scroll through long lists, to click numerous navigational links, and to read a lot of text to find the exact text they want to read. Text features of resources are almost always presented alphabetically, and these alphabetical lists can sometimes be very long. Alphabetical ordering is certainly an improvement over no ordering, but it generally has no bearing on features with an inherent non-alphabetical ordering (e.g., dates of historical events), nor does it necessarily group similar items together. Alphabetical ordering of resources is analogous to one of the most familiar complaints about dictionaries: sometimes you need to know how to spell a word in order to look up its correct spelling in the dictionary. Some have used technology to replicate the appearance of physical libraries, presenting rooms of bookcases and shelves of book spines in virtual 3D environments. This approach presents a problem, as few book spines can be displayed legibly on a monitor screen. This article examines the role of book spines, call numbers, and other traditional organizational and information discovery concepts, and integrates this knowledge with information visualization techniques to show how computers and monitors can meet or exceed similar information discovery methods. The goal is to tap the unique potential of current information visualization approaches in order to improve information discovery, offer new services, and, most important of all, improve user satisfaction. We need to capitalize on what computers do well while bearing in mind their limitations. The intent is to design GUIs that optimize utility and provide a positive experience for the user.
  4. Peters, C.; Picchi, E.: Across languages, across cultures : issues in multilinguality and digital libraries (1997) 0.04
    0.04316772 = product of:
      0.12950316 = sum of:
        0.07707134 = weight(_text_:wide in 1233) [ClassicSimilarity], result of:
          0.07707134 = score(doc=1233,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 1233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=1233)
        0.05243182 = weight(_text_:computer in 1233) [ClassicSimilarity], result of:
          0.05243182 = score(doc=1233,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 1233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=1233)
      0.33333334 = coord(2/6)
    
    Abstract
    With the recent rapid diffusion over the international computer networks of world-wide distributed document bases, the question of multilingual access and multilingual information retrieval is becoming increasingly relevant. We briefly discuss just some of the issues that must be addressed in order to implement a multilingual interface for a Digital Library system and describe our own approach to this problem.
  5. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.04
    0.042866588 = product of:
      0.085733175 = sum of:
        0.024084795 = weight(_text_:wide in 2467) [ClassicSimilarity], result of:
          0.024084795 = score(doc=2467,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.122383565 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.045263432 = weight(_text_:web in 2467) [ClassicSimilarity], result of:
          0.045263432 = score(doc=2467,freq=24.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 2467, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.016384944 = weight(_text_:computer in 2467) [ClassicSimilarity], result of:
          0.016384944 = score(doc=2467,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.100942515 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.5 = coord(3/6)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combining the two, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no single way that everyone will always use when looking for information; the more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. Thus people could easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user (a toy sketch of such faceted filtering follows this abstract). Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966); Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975); and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80.) Nevertheless, I hope this bibliography will be useful both to those new to faceted hypertext systems and to those familiar with them. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example in the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
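    The movie-listing example above can be made concrete with a toy Python sketch of faceted filtering (the data, field names, and values are invented for illustration; this is not code from any work cited here):

      # Toy faceted navigation over movie showtimes (invented sample data).
      # Each facet (movie, theatre, neighbourhood, hour) filters independently;
      # any combination narrows the same result set, matching the point above.
      showtimes = [
          {"movie": "Goldfinger", "theatre": "Roxy", "neighbourhood": "Little Finland", "hour": 14},
          {"movie": "Goldfinger", "theatre": "Paramount", "neighbourhood": "Downtown", "hour": 19},
          {"movie": "Metropolis", "theatre": "Roxy", "neighbourhood": "Little Finland", "hour": 15},
      ]

      def facet_search(items, **facets):
          """Return items matching every supplied facet value."""
          return [it for it in items if all(it.get(k) == v for k, v in facets.items())]

      print(facet_search(showtimes, theatre="Roxy"))                           # what's at the Roxy?
      print(facet_search(showtimes, neighbourhood="Little Finland", hour=14))  # nearby at 2 o'clock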
  6. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.04
    0.04264177 = product of:
      0.1279253 = sum of:
        0.057803504 = weight(_text_:wide in 4333) [ClassicSimilarity], result of:
          0.057803504 = score(doc=4333,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 4333, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.07012181 = weight(_text_:web in 4333) [ClassicSimilarity], result of:
          0.07012181 = score(doc=4333,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.48375595 = fieldWeight in 4333, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
      0.33333334 = coord(2/6)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of the content it presents in new standardized languages such as RDF Schema and OWL. This thesis deals with the information retrieval aspect, i.e. it examines to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either modelled explicitly or can be derived implicitly by applying inference. Building on the retrieval engine PIRE, developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
    Theme
    Semantic Web
  7. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.04
    0.04241117 = product of:
      0.1272335 = sum of:
        0.03853567 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.03853567 = score(doc=79,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.08869784 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.08869784 = score(doc=79,freq=36.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.33333334 = coord(2/6)
    
    Abstract
    With the tremendous growth in data volume, and data being produced every second on millions of devices across the globe, there is a pressing need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Data, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources, which makes it more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts universally through pictorial representation, and the Semantic Web broadens the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into Semantic Web technologies; it also elucidates the issues surrounding the Semantic Web and their solutions. The chapter highlights the Semantic Web architecture in detail while comparing it with traditional search systems, and classifies the architecture into three major pillars: RDF, ontologies, and XML. Moreover, it describes the different Semantic Web tools used in the framework and technology, and illustrates different approaches taken by Semantic Web search engines. Besides stating numerous challenges faced by the Semantic Web, it also presents their solutions.
    Theme
    Semantic Web
  8. Li, Z.: A domain specific search engine with explicit document relations (2013) 0.04
    0.042189382 = product of:
      0.12656814 = sum of:
        0.04816959 = weight(_text_:wide in 1210) [ClassicSimilarity], result of:
          0.04816959 = score(doc=1210,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.078398556 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.078398556 = score(doc=1210,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5408555 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.33333334 = coord(2/6)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive evolution of the World Wide Web, aiming to convert the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents are being created with well-defined structures. Though these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information, and annotate documents with these data in formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations between them. This research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
    Theme
    Semantic Web
  9. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.04
    0.042189382 = product of:
      0.12656814 = sum of:
        0.04816959 = weight(_text_:wide in 5997) [ClassicSimilarity], result of:
          0.04816959 = score(doc=5997,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.078398556 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.078398556 = score(doc=5997,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5408555 = fieldWeight in 5997, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.33333334 = coord(2/6)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is also taken into account. Three thesauri are chosen for this purpose: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
    Theme
    Semantic Web
  10. Klic, L.; Miller, M.; Nelson, J.K.; Germann, J.E.: Approaching the largest 'API' : extracting information from the Internet with Python (2018) 0.04
    0.04017412 = product of:
      0.12052235 = sum of:
        0.057803504 = weight(_text_:wide in 4239) [ClassicSimilarity], result of:
          0.057803504 = score(doc=4239,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 4239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
        0.062718846 = weight(_text_:web in 4239) [ClassicSimilarity], result of:
          0.062718846 = score(doc=4239,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 4239, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
      0.33333334 = coord(2/6)
    
    Abstract
    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the World Wide Web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web-page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant-data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
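    In the spirit of the article, here is a hedged sketch using two of the four packages it names (the URL and tag choices are placeholders; the article's own warrant-data code is not reproduced here):

      # Fetch a page with urllib and extract link texts with BeautifulSoup.
      # example.org and the <a> selector are illustrative placeholders.
      import urllib.request
      from bs4 import BeautifulSoup  # pip install beautifulsoup4

      with urllib.request.urlopen("https://example.org/") as resp:
          html = resp.read()

      soup = BeautifulSoup(html, "html.parser")
      links = [a.get_text(strip=True) for a in soup.find_all("a")]
      print(links)  # fields like these can be assembled into a custom dataset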
  11. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.04
    0.039814256 = product of:
      0.11944277 = sum of:
        0.08667288 = weight(_text_:web in 231) [ClassicSimilarity], result of:
          0.08667288 = score(doc=231,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.59793836 = fieldWeight in 231, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
        0.03276989 = weight(_text_:computer in 231) [ClassicSimilarity], result of:
          0.03276989 = score(doc=231,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
      0.33333334 = coord(2/6)
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine-understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index Semantic Web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
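    A toy illustration of the hybrid-query idea (invented data structures, not Semplore's implementation, which reuses real IR index structures): intersect a structured type filter with an inverted keyword index over the same documents.

      # Toy hybrid query: structured filter (type) intersected with keyword hits.
      docs = {
          1: {"type": "Person", "text": "researcher working on semantic web search"},
          2: {"type": "City", "text": "Busan hosted the semantic web conference"},
          3: {"type": "Person", "text": "keyword search specialist"},
      }

      # Inverted index: term -> set of document ids.
      index = {}
      for doc_id, d in docs.items():
          for term in d["text"].split():
              index.setdefault(term, set()).add(doc_id)

      def hybrid_query(doc_type, keyword):
          """Documents of the given type whose text contains the keyword."""
          structured = {i for i, d in docs.items() if d["type"] == doc_type}
          return structured & index.get(keyword, set())

      print(hybrid_query("Person", "semantic"))  # -> {1}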
    Series
    Lecture notes in computer science ; 4825
    Source
    ISWC'07/ASWC'07 : Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference. Ed.: K. Aberer et al
    Theme
    Semantic Web
  12. Powell, J.; Fox, E.A.: Multilingual federated searching across heterogeneous collections (1998) 0.04
    0.03962797 = product of:
      0.11888391 = sum of:
        0.07707134 = weight(_text_:wide in 1250) [ClassicSimilarity], result of:
          0.07707134 = score(doc=1250,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 1250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=1250)
        0.041812565 = weight(_text_:web in 1250) [ClassicSimilarity], result of:
          0.041812565 = score(doc=1250,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2884563 = fieldWeight in 1250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1250)
      0.33333334 = coord(2/6)
    
    Abstract
    This article describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. It details a markup language for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages.
  13. Dambeck, H.: Wie Google mit Milliarden Unbekannten rechnet : Teil.1 (2009) 0.04
    0.03962797 = product of:
      0.11888391 = sum of:
        0.07707134 = weight(_text_:wide in 3081) [ClassicSimilarity], result of:
          0.07707134 = score(doc=3081,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 3081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3081)
        0.041812565 = weight(_text_:web in 3081) [ClassicSimilarity], result of:
          0.041812565 = score(doc=3081,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2884563 = fieldWeight in 3081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3081)
      0.33333334 = coord(2/6)
    
    Abstract
    A life without search engines? For anyone who spends much time on the World Wide Web, an almost absurd notion. To compute its result lists, Google uses an astonishingly simple mathematical procedure that copes with even billions of web pages.
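    The "astonishingly simple procedure" is, as the article's title suggests, presumably the PageRank computation: one linear equation per page, usually solved by power iteration. A minimal sketch on an invented three-page web (the damping factor 0.85 is the textbook value, an assumption here):

      # Power-iteration PageRank on a toy link graph (invented data).
      links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
      pages = sorted(links)
      rank = {p: 1.0 / len(pages) for p in pages}
      d = 0.85  # damping factor

      for _ in range(50):  # iterate until the ranks stabilize
          new = {p: (1 - d) / len(pages) for p in pages}
          for p, outs in links.items():
              for q in outs:
                  new[q] += d * rank[p] / len(outs)
          rank = new

      print(rank)  # at Google scale the same iteration runs over billions of pages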
  14. Thesaurus software (2001) 0.04
    0.039404403 = product of:
      0.11821321 = sum of:
        0.036585998 = weight(_text_:web in 6773) [ClassicSimilarity], result of:
          0.036585998 = score(doc=6773,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 6773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6773)
        0.081627205 = product of:
          0.16325441 = sum of:
            0.16325441 = weight(_text_:programs in 6773) [ClassicSimilarity], result of:
              0.16325441 = score(doc=6773,freq=4.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.6340458 = fieldWeight in 6773, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6773)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Members offer comments and suggest resources on programs for creating, maintaining, and publishing thesauri. Formerly a tool for writers and indexers, the thesaurus has taken on a new role as an essential component of the corporate information infrastructure. Many people are using word-processor or database programs to create and maintain thesauri, while others are using specialized tools that perform consistency checks and offer special reporting capabilities. Some also use thesaurus modules integrated into another application, such as web publishing, content management, or e-commerce. This article includes material from our own experience, email responses from members, and comments from participants in our seminars and roundtables. There's also an introduction to thesauri in a corporate information management system.
  15. Subramanian, S.; Shafer, K.E.: Clustering (1998) 0.04
    0.039268494 = product of:
      0.11780548 = sum of:
        0.052265707 = weight(_text_:web in 1103) [ClassicSimilarity], result of:
          0.052265707 = score(doc=1103,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1103)
        0.06553978 = weight(_text_:computer in 1103) [ClassicSimilarity], result of:
          0.06553978 = score(doc=1103,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=1103)
      0.33333334 = coord(2/6)
    
    Abstract
    This article presents our exploration of computer science clustering algorithms as they relate to the Scorpion system. Scorpion is a research project at OCLC that explores the indexing and cataloging of electronic resources. For a more complete description of Scorpion, please visit the Scorpion Web site at <http://purl.oclc.org/scorpion>
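    Scorpion's own algorithms are not named in this abstract; as a generic stand-in only, here is a document-clustering sketch with k-means over TF-IDF vectors (assumes scikit-learn; the sample documents are invented):

      # Generic document clustering (NOT Scorpion's code): k-means on TF-IDF.
      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = [
          "indexing and cataloging of electronic resources",
          "cataloging electronic library resources",
          "computer science clustering algorithms",
          "clustering algorithms in computer science",
      ]
      X = TfidfVectorizer().fit_transform(docs)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(labels)  # documents sharing vocabulary fall into the same cluster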
  16. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.04
    0.039188966 = product of:
      0.11756689 = sum of:
        0.09349574 = weight(_text_:web in 4331) [ClassicSimilarity], result of:
          0.09349574 = score(doc=4331,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6450079 = fieldWeight in 4331, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.024071148 = product of:
          0.048142295 = sum of:
            0.048142295 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.048142295 = score(doc=4331,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The Semantic Web, or Linked Data, has the potential to revolutionize the availability of data and knowledge as well as access to them. Knowledge organization systems such as thesauri, which index and structure content, can contribute a great deal to this. Unfortunately, many of these systems are still available only in book form or inside specialized applications. How, then, can they be used for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the Web and linked with other resources.
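    As a small illustration of such a "translation" (URIs and labels are invented; assumes the Python rdflib package): one thesaurus descriptor with a preferred label and a broader-term link, serialized so it can be cited and linked on the Web.

      # One SKOS concept with prefLabel and broader, serialized as Turtle.
      # Example URIs and the German label are illustrative assumptions.
      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import SKOS

      g = Graph()
      g.bind("skos", SKOS)
      concept = URIRef("http://example.org/thesaurus/ontology")
      g.add((concept, SKOS.prefLabel, Literal("Ontologie", lang="de")))
      g.add((concept, SKOS.broader, URIRef("http://example.org/thesaurus/knowledge-organization")))
      print(g.serialize(format="turtle"))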
    Date
    15. 3.2011 19:21:22
    Theme
    Semantic Web
  17. OWL Web Ontology Language Test Cases (2004) 0.04
    0.039188966 = product of:
      0.11756689 = sum of:
        0.09349574 = weight(_text_:web in 4685) [ClassicSimilarity], result of:
          0.09349574 = score(doc=4685,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6450079 = fieldWeight in 4685, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4685)
        0.024071148 = product of:
          0.048142295 = sum of:
            0.048142295 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.048142295 = score(doc=4685,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
    Theme
    Semantic Web
  18. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.04
    0.03843217 = product of:
      0.115296505 = sum of:
        0.073171996 = weight(_text_:web in 8365) [ClassicSimilarity], result of:
          0.073171996 = score(doc=8365,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.50479853 = fieldWeight in 8365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=8365)
        0.04212451 = product of:
          0.08424902 = sum of:
            0.08424902 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.08424902 = score(doc=8365,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 6.2015 16:08:38
  19. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.04
    0.03843217 = product of:
      0.115296505 = sum of:
        0.073171996 = weight(_text_:web in 4643) [ClassicSimilarity], result of:
          0.073171996 = score(doc=4643,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.50479853 = fieldWeight in 4643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=4643)
        0.04212451 = product of:
          0.08424902 = sum of:
            0.08424902 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.08424902 = score(doc=4643,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 9.2007 15:41:14
    Theme
    Semantic Web
  20. Pott, O.; Wielage, G.: XML Praxis und Referenz (2000) 0.04
    0.037795175 = product of:
      0.11338552 = sum of:
        0.06812209 = weight(_text_:wide in 6985) [ClassicSimilarity], result of:
          0.06812209 = score(doc=6985,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.34615302 = fieldWeight in 6985, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6985)
        0.045263432 = weight(_text_:web in 6985) [ClassicSimilarity], result of:
          0.045263432 = score(doc=6985,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 6985, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6985)
      0.33333334 = coord(2/6)
    
    Abstract
    This book is devoted to one of the most fascinating and innovative topics of the Internet's present and immediate future: XML. Never intended as a replacement for HTML, it both widens the spectrum of possible Internet applications and closes gaping holes and technical shortcomings. Make no mistake: anyone who has dealt with HTML as a web administrator, as the author of a private or business website, or as an intranet manager or user will not be able to avoid XML in the future. Outside the online scene, too, XML has already established itself as the trend-setting standard for document management. This book offers complete XML and XSL knowledge at a practical and advanced level. Alongside a well-grounded introduction, you will find all the know-how you need for working with XML, consistently documented and illustrated through practical applications. Companies, friends, colleagues, and the Markt & Technik publishing house supported us with great commitment and considerable time; our thanks therefore go to all who have had, and will continue to have, a part in this book's success. In this second, fully updated and greatly expanded edition, we were able to take account of numerous positive responses from readers. The book now also covers the latest developments in XML, for example SMIL and WML (WAP) as well as the XHTML recommendation first published in December 1999.
    RSWK
    World wide web / Seite / Gestaltung (213)
    Subject
    World wide web / Seite / Gestaltung (213)

Languages

  • e 457
  • d 193
  • a 5
  • el 2
  • f 2
  • i 2
  • nl 1

Types

  • a 287
  • i 20
  • r 15
  • n 14
  • x 13
  • s 12
  • m 11
  • p 3
  • b 2

Themes