Search (82 results, page 1 of 5)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.14
    0.13889162 = product of:
      0.27778324 = sum of:
        0.03968332 = product of:
          0.11904996 = sum of:
            0.11904996 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.11904996 = score(doc=701,freq=2.0), product of:
                0.3177388 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03747799 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.11904996 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.11904996 = score(doc=701,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.11904996 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.11904996 = score(doc=701,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(3/6)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
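The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. Its arithmetic can be reproduced from the printed values; a minimal sketch of one leaf for doc 701 (the helper names are illustrative, not Lucene's API):

```python
import math

def tf(freq):
    """Term-frequency component of ClassicSimilarity: sqrt(freq)."""
    return math.sqrt(freq)

# Values taken from the first leaf of the explain tree (doc 701):
doc_freq, max_docs = 24, 44218
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq=24, maxDocs=44218) ≈ 8.478011
query_norm = 0.03747799                            # queryNorm
field_norm = 0.03125                               # fieldNorm(doc=701)
freq = 2.0                                         # termFreq=2.0

query_weight = idf * query_norm                    # queryWeight  ≈ 0.3177388
field_weight = tf(freq) * idf * field_norm         # fieldWeight  ≈ 0.3746787
score = query_weight * field_weight                # leaf score   ≈ 0.11904996
```

The outer `coord(n/m)` factors in the tree then scale the summed leaf scores by the fraction of query terms that matched.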
  2. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.02
    0.024729438 = product of:
      0.14837663 = sum of:
        0.14837663 = weight(_text_:ranking in 1909) [ClassicSimilarity], result of:
          0.14837663 = score(doc=1909,freq=12.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.7319307 = fieldWeight in 1909, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1909)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal reveal some weaknesses of this approach, which show up in most metadata-driven Digital Libraries (DLs) and subject-specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that achieve a certain quality in DLs when combined with the heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods, derived from scientometrics and network analysis, will be implemented with the objective of re-ranking result sets by the following structural properties: ranking the results by core journals (so-called Bradfordizing) and ranking by the centrality of authors in co-authorship networks. Findings - The methods to be implemented focus on the query side and the result side of a search and are designed to positively influence each other. Conceptually, they will improve search quality and ensure that the most relevant documents in result sets are ranked higher. Originality/value - The central contribution of the paper is the integration of three structural value-adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages of the information retrieval process: query construction, search and ranking, and re-ranking.
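The re-ranking by core journals ("Bradfordizing") described in the abstract can be sketched roughly as follows: count how many hits each journal contributes to the result set, then reorder the set so that documents from the most productive (core) journals come first. This is a simplified illustration of the idea, not the project's actual implementation; the data structures are invented for the example:

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a result list so that documents from high-frequency
    (core) journals move to the top; ties keep their original order."""
    journal_counts = Counter(doc["journal"] for doc in results)
    return sorted(results, key=lambda doc: -journal_counts[doc["journal"]])

results = [
    {"title": "A", "journal": "J. Documentation"},
    {"title": "B", "journal": "Rare Bulletin"},
    {"title": "C", "journal": "J. Documentation"},
    {"title": "D", "journal": "J. Documentation"},
]
reranked = bradfordize(results)
# Documents from the core journal now lead the list: A, C, D, B.
```

The centrality-based variant would replace the journal counts with, e.g., degree centrality of each document's authors in a co-authorship graph.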
  3. Yahoo kündigt semantische Suche an (2008) 0.02
    0.018721774 = product of:
      0.11233064 = sum of:
        0.11233064 = weight(_text_:suchmaschine in 1840) [ClassicSimilarity], result of:
          0.11233064 = score(doc=1840,freq=4.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.53008634 = fieldWeight in 1840, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.046875 = fieldNorm(doc=1840)
      0.16666667 = coord(1/6)
    
    Content
    "Yahoo has announced that it will integrate some of the key Semantic Web standards into its search engine. The company plans to build several semantic web index formats into its search technology. Instead of spitting out a long list of links, a semantic search engine could understand what kind of object or person is being searched for and offer additional information. Experts expect the new technology to put pressure on established offerings. Google continues to rely on conventional technology. Yahoo's move could give the spread of the technology a considerable boost. Despite the remarkable progress of the Semantic Web over recent years, the average user has not yet noticed any of it, says Amit Kumar, Product Management Director at Yahoo. Yahoo has now realized that this is slowly changing. As in the early days of the web, many people are annotating data with labels and index terms that semantic search engines need in order to search the web. Yahoo has recognized that there is now enough information to serve as the basis for a semantic web search."
  4. Wagner, S.: Barrierefreie und thesaurusbasierte Suchfunktion für das Webportal der Stadt Nürnberg (2007) 0.02
    0.015444675 = product of:
      0.09266805 = sum of:
        0.09266805 = weight(_text_:suchmaschine in 1724) [ClassicSimilarity], result of:
          0.09266805 = score(doc=1724,freq=2.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.43729892 = fieldWeight in 1724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1724)
      0.16666667 = coord(1/6)
    
    Abstract
    In a preceding diploma thesis, a search engine based on the product e:IAS by empolis GmbH was implemented for the web portal of the City of Nuremberg. This solution is to be improved and extended in several areas. Meaningful log files are to be generated and analyzed; in particular, the analyses should be comparable with those of the previous search solution. The presentation of results is to take accessibility requirements into account, and the existing templates are to be adapted accordingly. The solution is to be extended with approaches to semantic search. The plan is to expand the existing use of synonyms and to extend the taxonomies into a thesaurus. Various options are to be examined, and one of them integrated, at least as a prototype.
  5. Ning, X.; Jin, H.; Wu, H.: RSS: a framework enabling ranked search on the semantic web (2008) 0.01
    0.014277548 = product of:
      0.085665286 = sum of:
        0.085665286 = weight(_text_:ranking in 2069) [ClassicSimilarity], result of:
          0.085665286 = score(doc=2069,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.42258036 = fieldWeight in 2069, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2069)
      0.16666667 = coord(1/6)
    
    Abstract
    The semantic web not only contains resources but also includes the heterogeneous relationships among them, which sharply distinguishes it from the current web. As the semantic web grows, specialized search techniques become increasingly important. In this paper, we present RSS - a framework for enabling ranked semantic search on the semantic web. In this framework, the heterogeneity of relationships is fully exploited to determine the global importance of resources. In addition, the search results can be greatly expanded with the entities most semantically related to the query, so that users can be provided with properly ordered semantic search results by combining global ranking values with the relevance between the resources and the query. The proposed semantic search model, which supports inference, is very different from traditional keyword-based search methods. Moreover, RSS also differs from many current methods of accessing semantic web data in that it applies novel ranking strategies to avoid returning search results in disorder. The experimental results show that the framework is feasible and can produce a better ordering of semantic search results than directly applying the standard PageRank algorithm to the semantic web.
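Combining a query-independent global importance value with query-dependent relevance, as the abstract describes, is commonly realized as a weighted sum or product. A hedged sketch of the weighted-sum variant (the weight `alpha`, the resource names, and the score values are assumptions for illustration, not the paper's actual formula):

```python
def combined_score(global_rank, relevance, alpha=0.5):
    """Blend a query-independent importance value with query-dependent
    relevance; alpha controls the trade-off between the two signals."""
    return alpha * global_rank + (1 - alpha) * relevance

# (global_rank, relevance) pairs for two hypothetical resources:
candidates = {
    "resource_a": (0.9, 0.2),   # globally important, barely relevant
    "resource_b": (0.4, 0.8),   # less important, highly relevant
}
ranked = sorted(candidates, key=lambda r: -combined_score(*candidates[r]))
# The highly relevant resource outranks the merely important one.
```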
  6. Tochtermann, K.; Maurer, H.: Semantic Web : Geschichte und Ausblick einer Vision (2006) 0.01
    0.012481183 = product of:
      0.0748871 = sum of:
        0.0748871 = weight(_text_:suchmaschine in 5713) [ClassicSimilarity], result of:
          0.0748871 = score(doc=5713,freq=4.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.3533909 = fieldWeight in 5713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03125 = fieldNorm(doc=5713)
      0.16666667 = coord(1/6)
    
    Content
    "Imagine the following scenario. Ms. Maler is looking for a homeopathic doctor in her home town. Using her preferred search engine, Ms. Maler enters the search terms doctor, homeopathy, and city of Graz. As a result, the search engine spits out a long list of links. Ms. Maler picks out the links that point directly to medical practices. Naturally, her list now also includes doctors who do not practice homeopathy. She sorts these out, as well as those that cannot be reached via the local public transport network. To do so, she of course first has to look up the exact location of some of the addresses on the city map. Finally, she sorts out the doctors who have no ratings or who were not rated at least as good. For the remaining doctors, Ms. Maler reviews the office hours and compares them with her calendar. After 20 minutes, Ms. Maler has finally found three doctors who come into question. Whether she has overlooked any, she of course does not know." Couldn't all of this be simpler, faster, better? Now move a few years into the future and imagine a project manager in a transnational corporation who, together with two other large corporations, fifteen smaller companies, and a broad network of independent specialists, is working on a groundbreaking space tourism project. An unexpected court ruling has just thoroughly shaken the entire framework of the project. And in the real-time economy, his project partners, investors, and not least his boss expect fast and reliable answers - and that requires not only research that makes Ms. Maler's doctor search look like child's play, but also close virtual collaboration across several time zones and disciplines ... shouldn't all of this be simpler, faster, better than today?
    The answer given by Tim Berners-Lee, the "inventor" of the World Wide Web, and by a rather large and steadily growing group of researchers, engineers, and increasingly also users, is: yes. And we also know how. This is where the notion of the Semantic Web comes into play. The basic idea is to enrich content on the web in such a way that it is not only understandable to humans but can also be captured by machines, at least to the extent that automation becomes possible at the level of meaning as well. How and by what means this happens in detail is the subject of the contributions in this volume, which span the arc from the defining framework conditions - the working environments of the knowledge society - to future intelligent services - the Semantic Web Services.
  7. Wang, H.; Liu, Q.; Penin, T.; Fu, L.; Zhang, L.; Tran, T.; Yu, Y.; Pan, Y.: Semplore: a scalable IR approach to search the Web of Data (2009) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1638) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1638,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1638, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1638)
      0.16666667 = coord(1/6)
    
    Abstract
    The Web of Data keeps growing rapidly. However, the full exploitation of this large amount of structured data faces numerous challenges, such as usability, scalability, imprecise information needs, and data change. We present Semplore, an IR-based system that aims at addressing these issues. Semplore supports intuitive faceted search and complex queries on both text and structured data. It combines imprecise keyword search and precise structured query in a unified ranking scheme. Scalable query processing is supported by leveraging inverted indexes traditionally used in IR systems. This is combined with a novel block-based index structure to support efficient index updates when data changes. The experimental results show that Semplore is an efficient and effective system for searching the Web of Data and can be used as a basic infrastructure for Web-scale Semantic Web search engines.
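The inverted indexes mentioned in the abstract are the classic IR structure Semplore builds on: a map from each term to the list of documents containing it, with conjunctive keyword queries answered by intersecting posting lists. A toy sketch of the idea (not Semplore's block-based structure; the documents are invented for illustration):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {
    1: "semantic web search",
    2: "keyword search engines",
    3: "semantic search on structured data",
}
index = build_inverted_index(docs)

# A conjunctive keyword query is the intersection of posting lists:
hits = set(index["semantic"]) & set(index["search"])
```

Supporting efficient updates on such a structure when documents change is exactly the problem the paper's block-based index addresses.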
  8. Sah, M.; Wade, V.: Personalized concept-based search on the Linked Open Data (2015) 0.01
    0.011422038 = product of:
      0.06853223 = sum of:
        0.06853223 = weight(_text_:ranking in 2511) [ClassicSimilarity], result of:
          0.06853223 = score(doc=2511,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.33806428 = fieldWeight in 2511, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper, we present a novel personalized concept-based search mechanism for the Web of Data based on results categorization. The innovation of the paper comes from combining novel categorization and personalization techniques, and using categorization for providing personalization. In our approach, search results (Linked Open Data resources) are dynamically categorized into Upper Mapping and Binding Exchange Layer (UMBEL) concepts using a novel fuzzy retrieval model. Then, results with the same concepts are grouped together to form categories, which we call concept lenses. Such categorization enables concept-based browsing of the retrieved results aligned to users' intent or interests. When the user selects a concept lens for exploration, results are immediately personalized. In particular, all concept lenses are personally re-organized according to their similarity to the selected lens. Within the selected concept lens, more relevant results are included using results re-ranking and query expansion, and relevant concept lenses are suggested to support results exploration. This allows dynamic adaptation of results to the user's local choices. We also support interactive personalization; when the user clicks on a result, relevant lenses and results within the interacted lens are included using results re-ranking and query expansion. Extensive evaluations were performed to assess our approach: (i) Performance of our fuzzy-based categorization approach was evaluated on a particular benchmark (~10,000 mappings). The evaluations showed that we can achieve highly acceptable categorization accuracy and perform better than the vector space model. (ii) Personalized search efficacy was assessed using a user study with 32 participants in a tourist domain. The results revealed that our approach performed significantly better than a non-adaptive baseline search. (iii) Dynamic personalization performance was evaluated, which illustrated that our personalization approach is scalable. (iv) Finally, we compared our system with the existing LOD search engines, which showed that our approach is unique.
  9. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 6061) [ClassicSimilarity], result of:
          0.0605745 = score(doc=6061,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.16666667 = coord(1/6)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
  10. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 430) [ClassicSimilarity], result of:
          0.0605745 = score(doc=430,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=430)
      0.16666667 = coord(1/6)
    
    Abstract
    With the advance of the semantic Web, increasing amounts of data are available in a structured and machine-understandable form. This opens opportunities for users to employ semantic queries instead of simple keyword-based ones to accurately express the information need. However, constructing semantic queries is a demanding task for human users [11]. To compose a valid semantic query, a user has to (1) master a query language (e.g., SPARQL) and (2) acquire sufficient knowledge about the ontology or the schema of the data source. While there are systems which support this task with visual tools [21, 26] or natural language interfaces [3, 13, 14, 18], the process of query construction can still be complex and time consuming. According to [24], users prefer keyword search, and struggle with the construction of semantic queries although being supported with a natural language interface. Several keyword search approaches have already been proposed to ease information seeking on semantic data [16, 32, 35] or databases [1, 31]. However, keyword queries lack the expressivity to precisely describe the user's intent. As a result, ranking can at best put query intentions of the majority on top, making it impossible to take the intentions of all users into consideration.
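The gap in expressivity between keyword queries and semantic queries that the excerpt describes can be made concrete with a toy triple store: a keyword matches any mention of a term, while a triple pattern pins down the intended relation. A minimal sketch (the data and the `None`-as-variable convention are invented for illustration; a real system would use SPARQL over RDF):

```python
# A tiny set of (subject, predicate, object) triples:
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Germany", "memberOf", "EU"),
    ("Berlin_Band", "hasName", "Berlin"),
]

def keyword_match(term):
    """Keyword search: every triple that mentions the term anywhere."""
    return [t for t in triples if term in t]

def pattern_match(s, p, o):
    """Semantic query: None acts as a variable (like SPARQL's ?x)."""
    return [t for t in triples
            if all(q is None or q == v for q, v in zip((s, p, o), t))]

keyword_match("Berlin")                       # ambiguous: city and band
pattern_match(None, "capitalOf", "Germany")   # expresses the precise intent
```

The interactive query construction the paper proposes sits between these two extremes: the user types keywords, and the system incrementally disambiguates them into such patterns.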
  11. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 5300) [ClassicSimilarity], result of:
          0.0605745 = score(doc=5300,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
      0.16666667 = coord(1/6)
    
    Abstract
    The web has been, in recent decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, with its effectiveness relying on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built on a literature review, a definition of the SW is presented, and the related concepts sometimes used as synonyms are discussed. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still only tenuously connected: the virtually unlimited possibility of connections between data - the web domain - with the potential for automated inference by "intelligent" systems - the semantic component.
  12. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.01
    0.008076601 = product of:
      0.0484596 = sum of:
        0.0484596 = weight(_text_:ranking in 4709) [ClassicSimilarity], result of:
          0.0484596 = score(doc=4709,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.23904754 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
      0.16666667 = coord(1/6)
    
    Content
    "Swoogle, the Semantic Web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It is an engine tailored towards finding documents on the Semantic Web. The whole research paper is available here. The Semantic Web is touted as the next generation of online content representation, in which web documents are represented in a language that is not only easy for humans but machine-readable as well (easing the integration of data as never thought possible). The main elements of the Semantic Web include data model description formats such as the Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). Swoogle is an attempt to mine and index this new set of web documents. The engine crawls semantic documents like most web search engines, and the search is available as a web service too. The engine is primarily written in Java, with PHP used for the front-end and MySQL for the database. Swoogle is capable of searching over 10,000 ontologies and indexes more than 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more Google-type page ranking and also mining the documents for the inter-relationships that are the basis of the Semantic Web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing Semantic Web documents is one that most engines will have to take at some point.
    When the Internet debuted, there were no specific engines available for indexing or searching. The search domain only picked up as more and more content became available. One fundamental question that I have always wondered about is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in the indexing of documents. It is here that the new semantic-document search engines can close the gap. Experimenting with the concept of search in the Semantic Web can only bode well for the future of search technology."
  13. Brambilla, M.; Ceri, S.: Designing exploratory search applications upon Web data sources (2012) 0.01
    0.008076601 = product of:
      0.0484596 = sum of:
        0.0484596 = weight(_text_:ranking in 428) [ClassicSimilarity], result of:
          0.0484596 = score(doc=428,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.23904754 = fieldWeight in 428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=428)
      0.16666667 = coord(1/6)
    
    Abstract
    Search is the preferred method to access information in today's computing systems. The Web, accessed through search engines, is universally recognized as the source for answering users' information needs. However, offering a link to a Web page does not cover all information needs. Even simple problems, such as "Which theater offers an at least three-star action movie in London close to a good Italian restaurant," can only be solved by searching the Web multiple times, e.g., by extracting a list of the recent action movies filtered by ranking, then looking for movie theaters, then looking for Italian restaurants close to them. While search engines hint at useful information, the user's brain is the fundamental platform for information integration. An important trend is the availability of new, specialized data sources - the so-called "long tail" of the Web of data. Such carefully collected and curated data sources can be much more valuable than information currently available in Web pages; however, many sources remain hidden or insulated, for lack of software solutions for bringing them to the surface and making them usable in the search context. A new class of tailor-made systems, designed to satisfy the needs of users with specific aims, will support the publishing and integration of data sources for vertical domains; the user will be able to select sources based on individual or collective trust, and systems will be able to route queries to such sources and to provide easy-to-use interfaces for combining them within search strategies, at the same time rewarding the data source owners for each contribution to effective search. Efforts such as Google's Fusion Tables show that the technology for bringing hidden data sources to the surface is feasible.
  14. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.01
    Date
    20. 2.2009 10:29:39
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  15. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  16. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Wa, R.Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.01
    Date
    22. 3.2013 19:29:20
  17. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.01
    Abstract
    Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will, which leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadening again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how well the visualizations fit the workflow, and to what degree their features seemed useful for the exploration of linked data. There is a difference between the way users interact with resources, visually or textually, and the way resources are represented for machines and processed by algorithms. This difference complicates bridging the users' intents and machine-executable queries. It is important to implement this 'translation' mechanism so that it affects the search as favorably as possible in terms of performance, complexity, and accuracy. To do this, we describe a second technique that supports such a bridging component. Our second technique is developed around three features that support the search process: looking up, relating, and ranking resources. The main goal is to ensure that the resources in the results are as precise and relevant as possible. During the evaluation of this technique, we looked not only at the precision of the search results but also at how the effectiveness of the search evolved while the user executed certain actions sequentially.
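    The three features named in this abstract - looking up, relating, and ranking resources - can be sketched over a toy linked-data graph. The graph contents and the shortest-path heuristic used for ranking below are illustrative assumptions, not the thesis's actual algorithm.

    ```python
    from collections import deque

    # Toy linked-data graph: resource -> set of directly related resources.
    # All nodes and edges are invented for illustration.
    graph = {
        "Rembrandt": {"Night Watch", "Amsterdam"},
        "Night Watch": {"Rembrandt", "Rijksmuseum"},
        "Rijksmuseum": {"Night Watch", "Amsterdam"},
        "Amsterdam": {"Rembrandt", "Rijksmuseum"},
        "Vermeer": {"Delft"},
        "Delft": {"Vermeer"},
    }

    def look_up(label):
        """Map a user's textual label to a graph resource (exact match here)."""
        return label if label in graph else None

    def relate(src):
        """BFS distances from src: how each resource relates to the query."""
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in dist:
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        return dist

    def rank(label, candidates):
        """Order candidate resources by graph distance to the looked-up one."""
        src = look_up(label)
        dist = relate(src)
        reachable = [c for c in candidates if c in dist]
        return sorted(reachable, key=lambda c: dist[c])

    print(rank("Rembrandt", ["Rijksmuseum", "Night Watch", "Vermeer"]))
    # -> ['Night Watch', 'Rijksmuseum']
    ```

    Unreachable candidates (here "Vermeer") are dropped, and closer resources rank first; real systems would substitute a proper semantic distance measure for plain hop count.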
  18. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.00
    Abstract
    All the papers underwent a thorough and rigorous peer-review process. The review and selection this year was highly competitive, and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were accepted, totaling 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the arts department of KU Leuven (Belgium) and director of the university library, addressed semantic research drawing from his work with Europeana. The title of his presentation was "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor from our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new possibilities at the intersection of search and linked data. The title of his talk was "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
    Date
    17.12.2013 12:51:22
  19. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.00
    Date
    22. 9.2007 15:41:14
  20. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.00
    Date
    22. 9.2007 15:41:14

Languages

  • e 65
  • d 17

Types

  • a 50
  • m 18
  • el 16
  • s 11
  • x 3
  • n 1
